NOISE

Network Operations and Internet Security @ UChicago



Computer Networks 6th Edition Released

After several years of updates and edits, we have released a new version of Computer Networks (6th Edition). As I “grew up” reading early editions of this bible of computer networking, I was excited to have the opportunity to edit the latest edition.

The new edition of the book contains many updates, including:

  • An updated introduction chapter, which talks about many of the changes in the Internet’s structure and design over the past several years (including everything from software-defined networking, to 5G, to network neutrality).
  • A completely updated section on wireless and cellular networks.
  • Updates to the chapter on the Web, including new material on QUIC, HTTP/2, and modern web design.
  • A completely new chapter on Internet security, including new material on recent developments in DNS security and privacy (e.g., DNS over HTTPS; a small example of a DoH lookup follows this list).
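
For a flavor of what a DNS-over-HTTPS lookup looks like in practice, here is a minimal sketch that resolves a name through Google’s public DoH JSON API. The endpoint and response fields follow Google’s public documentation, but treat the script as illustrative rather than as material from the book:

    # Minimal DNS-over-HTTPS lookup via Google's public DoH JSON API.
    # Illustrative sketch; not code from the book.
    import json
    import urllib.request

    def doh_resolve(name: str, rtype: str = "A") -> list[str]:
        url = f"https://dns.google/resolve?name={name}&type={rtype}"
        with urllib.request.urlopen(url) as resp:
            answer = json.load(resp)
        # Each record in "Answer" carries the resolved value in "data".
        return [rec["data"] for rec in answer.get("Answer", [])]

    if __name__ == "__main__":
        print(doh_resolve("example.com"))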

Pick up a copy on Amazon or the Pearson website!



Sam Burnett Thesis Defense on Internet Censorship

Congratulations to Sam Burnett, who successfully defended his dissertation, “Enabling Bystanders to Facilitate Internet Censorship Measurement and Circumvention”.

Sam’s work designs systems that allow third parties to contribute resources both to measure the extent of censorship and to circumvent it. His work has been featured in New Scientist, Slashdot, and The Economist. An abstract of his dissertation is below, and you can also view an archive of the defense here. Congratulations, Sam!

Free and open exchange of information on the Internet is at risk: more than 60 countries practice some form of Internet censorship, and both the number of countries practicing censorship and the proportion of Internet users who are subject to it are on the rise. Understanding and mitigating these threats to Internet freedom is a continuous technological arms race between security researchers and advocates, and many of the most influential governments and corporations.

By its very nature, Internet censorship varies drastically from region to region, which has impeded nearly all efforts to observe and fight it on a global scale. Researchers and developers in one country may find it very difficult to study censorship in another; this is particularly true for those in North America and Europe attempting to study notoriously pervasive censorship in Asia and the Middle East.

This dissertation develops techniques and systems that empower users not affected by censorship, or bystanders, to assist in the measurement and circumvention of Internet censorship in other countries. Our work builds on the observation that there are people everywhere who would be willing to help us if only they knew how. First, we develop Encore, which allows webmasters to help study Web censorship by collecting measurements from their sites’ visitors. Encore leverages weaknesses in cross-origin security policy to collect measurements from a far more diverse set of vantage points than previously possible. Second, we build Collage, a technique that allows users to leverage the pervasiveness and scalability of user-generated content hosting services to disseminate censored content. Collage’s novel communication model is robust against censorship that is significantly more powerful than governments use today. Together, Encore and Collage make it significantly easier for people everywhere to help study and circumvent Internet censorship around the world.
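
Encore’s measurements run inside visitors’ browsers as cross-origin resource loads, but the underlying reachability check is simple to sketch. The hypothetical Python script below emulates what a single vantage point would report: whether fetching a test URL succeeds, how long it takes, and how it fails. It is a sketch of the idea, not Encore’s actual implementation:

    # Sketch of the reachability check a vantage point performs
    # (emulated server-side in Python; Encore itself runs in the browser).
    import time
    import urllib.request

    def check_reachability(url: str, timeout: float = 10.0) -> dict:
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                status, error = resp.status, None
        except OSError as exc:   # URLError, timeouts, connection resets
            status, error = None, str(exc)
        return {
            "url": url,
            "elapsed_s": round(time.monotonic() - start, 3),
            "status": status,   # HTTP status if the fetch succeeded
            "error": error,     # failure reason otherwise
        }

    if __name__ == "__main__":
        print(check_reachability("http://example.com/favicon.ico"))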



Georgia Tech Researchers Meet at Google to Discuss Censorship Measurement

Researchers from Georgia Tech, the Tor Project, Stony Brook, Citizen Lab, and the Open Technology Institute at the New America Foundation met to design and prototype new tools for measuring Internet censorship.  The meeting was hosted by Google’s Measurement Lab.  Many current tools under development were presented and discussed, including:

  • Encore, a tool that uses third-party measurements to determine Web accessibility
  • Centinel, a new platform for running cross-platform network interference measurements
  • MySpeedTest, an Android-based platform for measuring application performance
  • OONI, the open observatory of network interference
  • ICLab, a vision for a combined, cross-platform suite of censorship measurements

Meeting to Discuss Network Interference at Google, May 2014.



“SDX: A Software Defined Internet Exchange Point” to Appear at SIGCOMM 2014

A paper on SDX will appear at SIGCOMM 2014 this August. Read more about the SDX project here.

SDX: A Software Defined Internet Exchange

Arpit Gupta (Georgia Institute of Technology)
Laurent Vanbever (Princeton University)
Muhammad Shahbaz (Georgia Institute of Technology)
Sean P. Donovan (Georgia Institute of Technology)
Brandon Schlinker (University of Southern California)
Nick Feamster (Georgia Institute of Technology)
Jennifer Rexford (Princeton University)
Scott Shenker (UC Berkeley)
Russ Clark (Georgia Institute of Technology)
Ethan Katz-Bassett (University of Southern California)

Abstract

BGP severely constrains how networks can deliver traffic over the Internet. Today’s networks can only forward traffic based on the destination IP prefix, by selecting among routes offered by their immediate neighbors. We believe Software Defined Networking (SDN) could revolutionize wide-area traffic delivery, by offering direct control over packet-processing rules that match on multiple header fields and perform a variety of actions. Internet eXchange Points (IXPs) are a compelling place to start, given their central role in interconnecting many networks and their growing importance in bringing popular content closer to end users. To realize a Software Defined IXP (an “SDX”), we must create compelling applications, such as “application-specific peering”—where two networks peer only for (say) streaming video traffic. We also need new programming abstractions that allow participating networks to create and run these applications and a runtime that both behaves correctly when interacting with BGP and ensures that applications do not interfere with each other. Finally, we must ensure that the system scales, both in rule-table size and computational overhead. In this paper, we tackle these challenges and demonstrate the flexibility and scalability of our solutions through controlled and in-the-wild experiments. Our experiments demonstrate that our SDX implementation can implement representative policies for hundreds of participants who advertise full routing tables while achieving sub-second convergence in response to configuration changes and routing updates.
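
To make “application-specific peering” concrete, here is a toy Python model of a participant policy that steers only video-like traffic to a particular peer and leaves everything else on the BGP-chosen route. The names and the rule model are hypothetical; this is not the SDX runtime or its actual (Pyretic-based) policy syntax:

    # Toy model of an application-specific peering policy at an SDX.
    # Hypothetical names; not the SDX/Pyretic API.
    from dataclasses import dataclass

    @dataclass
    class Packet:
        dst_prefix: str
        dst_port: int

    BGP_NEXT_HOP = "peer_A"   # default route learned via BGP
    VIDEO_PEER = "peer_B"     # peer used only for video-like traffic
    VIDEO_PORTS = {80, 443}   # crude stand-in for "streaming video"

    def forward(pkt: Packet) -> str:
        # The SDX insight: match on header fields beyond the destination
        # prefix (here, the transport port) and override BGP selectively.
        if pkt.dst_port in VIDEO_PORTS:
            return VIDEO_PEER
        return BGP_NEXT_HOP

    if __name__ == "__main__":
        print(forward(Packet("203.0.113.0/24", 443)))  # -> peer_B
        print(forward(Packet("203.0.113.0/24", 25)))   # -> peer_A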



GT Noise Wins Departmental Awards

The lab was well represented at this year’s departmental awards: Russ Clark, Muhammad Shahbaz, and Sathya Gunasekaran all took home awards for their stellar work over the past year:

Russ Clark won the outstanding research scientist award.

Russ was nominated by Ron Hutchins, Beth Mynatt, and Nick Feamster. Here is the text from the letters that was read at the ceremony:

Ron: “Russ is a person who gets things done. He is very much in demand as a partner for research in the new Software Defined Networking area.  He has successfully brought funding to GT that has shown GT as one of the top 5 universities in the country in this area.”

Beth: “Russ is a terrific research scientist in SCS and a highly valued colleague. His knowledge and ability to engage external industry partners is highly valued by me and others and has resulted in multiple sustained partnerships across a spectrum of topics.”

Nick: “Russ brings tremendous operational, practical experience to the research projects that he and I work on together—a practical viewpoint that has benefitted both my and my students’ research tremendously. He bridges the gap between research and operations that today’s networking and systems research desperately needs to be successful.”


Muhammad Shahbaz won the TA award for his work on the Coursera SDN MOOC.

Here is the text from Prof. Feamster’s letter that was read at the ceremony:

“I have found Shahbaz incredibly enthusiastic and diligent about teaching. I observed the incredibly long hours he dedicated to the Coursera MOOC—even though he received no official TA credit. His efforts were largely responsible for the success of the course’s assignments, which were successfully completed by nearly 4,000 students. I also observed him enthusiastically and diligently answer questions from thousands of students. Everything he did as a TA, from designing assignments and quizzes to answering students’ questions and explaining concepts to smaller groups of students, was of the highest quality.”


Sathya Gunasekaran won the Donald V. Jackson fellowship for his work on Censorscope.

Here’s the text from Prof. Feamster’s letter that was read at the ceremony:

“Sathya is one of the most industrious and creative Masters students whom I have worked with at Georgia Tech.  He has a strong work ethic and, in a very short time, has gotten up to speed on a complex project on censorship and has even begun to take a leadership role on the project he is working on.”



BISmark Paper to appear at USENIX Technical Conference

A paper describing the design, implementation, and deployment of BISmark, the testbed that we have built to measure and characterize home networks, will appear at the USENIX Annual Technical Conference (ATC) in June. The BISmark project began nearly four years ago as part of an effort to measure the performance of broadband access networks. Since then, the platform has matured and now supports a variety of experiments and systems, ranging from algorithms that characterize home wireless networks to systems that accelerate Web performance. We also actively support experiments from other research groups, as well as other deployments (e.g., the PAWS project in Cambridge).
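
For a flavor of the kind of active measurement a BISmark router runs, the following minimal Python sketch approximates round-trip latency by timing a TCP handshake to a reference host. The host, port, and probe schedule here are hypothetical, and the real platform runs far richer throughput and wireless experiments on OpenWrt-based routers:

    # Minimal latency probe in the spirit of BISmark's active
    # measurements (hypothetical and heavily simplified).
    import socket
    import time

    def tcp_rtt(host: str, port: int = 80, timeout: float = 5.0) -> float | None:
        # Approximate RTT as the time to complete a TCP handshake.
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.monotonic() - start
        except OSError:
            return None  # unreachable, refused, or timed out

    if __name__ == "__main__":
        for _ in range(3):
            rtt = tcp_rtt("example.com")
            print(f"rtt: {rtt * 1000:.1f} ms" if rtt is not None else "probe failed")
            time.sleep(1)  # real deployments space probes much further apart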

The BISmark project page has more information on the project (including past papers and project contributors), and a pre-print of the BISmark paper, to appear in June 2014, is available here.

BISmark: A Testbed for Deploying Measurements and Applications in Broadband Access Networks

Srikanth Sundaresan, Sam Burnett, Nick Feamster
School of Computer Science, Georgia Tech

Walter de Donato
University of Napoli “Federico II”

BISmark (Broadband Internet Service Benchmark) is a deployment of home routers running custom software, and backend infrastructure to manage experiments and collect measurements. The project began in 2010 as an attempt to better understand the characteristics of broadband access networks. We have since deployed BISmark routers in hundreds of home networks in about thirty countries. BISmark is currently used and shared by researchers at nine institutions, including commercial Internet service providers, and has enabled studies of access link performance, network connectivity, Web page load times, and user behavior and activity. Research using BISmark and its data has informed both technical and policy research. This paper describes and revisits design choices we made during the platform’s evolution and lessons we have learned from the deployment effort thus far. We also explain how BISmark enables experimentation, and our efforts to make it available to the networking community.

[Preprint]



Nick Feamster Gives Keynote on “Research Revolutions” at CoNext

Nick Feamster presented a keynote talk on networking research revolutions, from packet switching to software-defined networking, at the CoNext conference in Santa Barbara.  The talk included the following highlights:

  • Normal science vs. revolutionary science
  • Two important revolutions: packet switching, and control-data plane separation
  • Methods for starting your own research revolution

The keynote talk was based on material that Feamster designed for an “Intro to the Ph.D.” course he teaches at Georgia Tech.

Slides of the talk are available here.




Marshini Chetty Presents Paper on Broadband Performance in South Africa

Marshini Chetty presented our paper “Measuring Broadband Performance in South Africa” at the 4th ACM Symposium on Computing for Development (DEV). The paper presents several new and important findings, including:

  • Neither fixed nor mobile throughput achieves the rates advertised by ISPs (in contrast to countries such as the US, where performance more closely matches advertised rates).
  • Mobile throughput is consistently higher than fixed-line (e.g., DSL) throughput, although both throughput and latency are considerably more variable on mobile providers.
  • Latency to other destinations on the African continent can be quite high, due to Internet routes that “detour” through Internet exchange points (IXPs) in Europe (e.g., the Amsterdam Internet Exchange, the London Internet Exchange).

We are now in the process of repeating this study in other African countries, in collaboration with Research ICT Africa and Google.



Papers on IXP Connectivity in Africa, Filter Bubbles Accepted to PAM

Congratulations to Arpit Gupta and Xinyu Xing, who recently had papers accepted at the 2014 Passive and Active Measurement Conference! Arpit’s paper studies connectivity and peering in Africa, including what ISOC has been calling “tromboning” (paths on the continent that detour through LINX in London or AMS-IX in Amsterdam). Xinyu’s paper studies inconsistent Web search results using a tool we built called Bobble.

The abstracts of the accepted papers are below.  The final versions of the papers will be posted here shortly, and the papers will be presented in March 2014 in Los Angeles.

Peering at the Internet’s Frontier: A First Look at ISP Interconnectivity in Africa
Arpit Gupta (Georgia Institute of Technology), Matt Calder (University of Southern California), Nick Feamster (Georgia Institute of Technology), Marshini Chetty (University of Maryland, College Park), Enrico Calandro (Research ICT Africa), Ethan Katz-Bassett (University of Southern California)

Abstract. In developing regions, the performance to commonly visited destinations is dominated by the network latency to these destinations, which is in turn affected by the connectivity from ISPs in these regions to the locations that host popular sites and content. We take a first look at ISP interconnectivity between various regions in Africa and discover that many Internet paths that should remain local instead detour through Europe. We investigate the causes of these circuitous Internet paths and evaluate the benefits of increased peering and better cache proxy placement for reducing latency to popular Internet sites.
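
A crude way to spot such circuitous paths is to compare a measured round-trip time against the minimum RTT the great-circle distance physically allows: when the measured value is many times the physical floor, the route likely detours through a distant exchange. A hypothetical back-of-the-envelope check (all numbers invented):

    # Back-of-the-envelope tromboning check: compare a measured RTT with
    # the minimum RTT physically possible over the direct distance.
    # Hypothetical threshold and numbers; illustrative only.
    SPEED_IN_FIBER_KM_PER_MS = 200.0   # ~2/3 the speed of light

    def min_rtt_ms(distance_km: float) -> float:
        return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

    def looks_circuitous(measured_rtt_ms: float, distance_km: float,
                         slack: float = 5.0) -> bool:
        # Flag paths whose RTT is several times the physical minimum.
        return measured_rtt_ms > slack * min_rtt_ms(distance_km)

    if __name__ == "__main__":
        # Two cities ~1,500 km apart with a 250 ms measured RTT, which is
        # consistent with a detour through a European IXP.
        print(looks_circuitous(measured_rtt_ms=250.0, distance_km=1500.0))  # True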

Exposing Inconsistent Web Search Results with Bobble
Xinyu Xing (Georgia Institute of Technology), Wei Meng (Georgia Institute of Technology), Dan Doozan (Georgia Institute of Technology), Nick Feamster (Georgia Institute of Technology), Wenke Lee (Georgia Institute of Technology), Alex Snoeren (UC San Diego)

Abstract. Personalized Web search can potentially provide users with search results that are tailored to their geography, the device from which they are searching, and a variety of other preferences and predispositions. Although most major search engines employ some type of personalization, the algorithms used to implement this personalization remain a “black box” to users, who are not aware of the effects of these personalization algorithms on the results that they ultimately see. Indeed, many users may be unaware that such personalization is taking place at all. This paper takes a first look at the nature of inconsistent search results that result from location-based personalization and search history. We present the design and implementation of Bobble, a tool that executes a single user query from a variety of different vantage points and under a range of different conditions and compares the consistency of the results that are returned from each query. Using more than 75,000 search queries from about 175 users over a nine-month period, we explore the nature of inconsistencies that arise in different search terms and regions and find that 98% of all Google search queries from Bobble users resulted in some inconsistency, and that geography is more important than search history in influencing the nature of the inconsistency. In contrast to a recent study, our measurements also indicate that the influence of search history on search inconsistency is measurable, though modest. To demonstrate the potential negative impact of search personalization, we also use Bobble to investigate more than 4,000 locally disreputable businesses. We find that for more than 40 of these businesses, negative search results are hidden from the local Google search result set but present in result sets obtained from other regions.
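
The comparison at the heart of Bobble is easy to sketch: issue the same query from several vantage points and quantify how much the returned result lists differ. The following hypothetical Python sketch scores inconsistency with Jaccard similarity over result URLs; Bobble’s actual metrics and infrastructure are richer:

    # Quantifying search-result inconsistency across vantage points,
    # in the spirit of Bobble (hypothetical and simplified).

    def jaccard(a: set[str], b: set[str]) -> float:
        return len(a & b) / len(a | b) if a | b else 1.0

    def max_inconsistency(results_by_vantage: dict[str, list[str]]) -> float:
        # 1.0 means some pair of vantage points saw disjoint results;
        # 0.0 means every pair saw identical result sets.
        vantages = list(results_by_vantage)
        sims = [
            jaccard(set(results_by_vantage[u]), set(results_by_vantage[v]))
            for i, u in enumerate(vantages)
            for v in vantages[i + 1:]
        ]
        return 1.0 - min(sims) if sims else 0.0

    if __name__ == "__main__":
        results = {
            "atlanta": ["a.com", "b.com", "c.com"],
            "london":  ["a.com", "d.com", "e.com"],
        }
        print(f"max pairwise inconsistency: {max_inconsistency(results):.2f}")  # 0.80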