Mon, 07/14/2008 - 13:34 by Olivier Bonaventure
The current Internet was designed in the 1970s to allow researchers to access remote computers. Since then, the Internet has grown tremendously, both in the number of users and in the number of supported services. Some of the architectural choices made in the 1970s for a small research network are not optimised for today's global commercial Internet. The IP Networking Lab participates actively in the development of the architecture of the future Internet.
Sat, 04/28/2007 - 01:14 by Damien Leroy
The Bloom filter is a data structure that was introduced in 1970 and that has been adopted by the networking research community in the past decade thanks to the bandwidth efficiencies that it offers for the transmission of set membership information between networked hosts. A sender encodes the information into a bit vector, the Bloom filter, that is more compact than a conventional representation. Computation and space costs for construction are linear in the number of elements. The receiver uses the filter to test whether various elements are members of the set. Though the filter will occasionally return a false positive, it will never return a false negative. When creating the filter, the sender can choose its desired point in a trade-off between the false positive rate and the size.
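To make the construction concrete, a minimal Bloom filter can be sketched as follows. The vector size, the number of hash functions, and the use of salted SHA-256 digests are illustrative choices for this sketch, not part of any particular protocol:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter sketch: an m-bit vector probed by k hash functions."""

    def __init__(self, m=1024, k=3):
        self.m = m               # size of the bit vector (trades space vs. false positives)
        self.k = k               # number of hash functions
        self.bits = [False] * m

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests of the item.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        # Insertion sets k bits; cost is linear in the number of elements added.
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # False means definitely absent (no false negatives);
        # True may occasionally be a false positive.
        return all(self.bits[pos] for pos in self._positions(item))
```

Choosing a larger `m` (or tuning `k`) moves the sender along the trade-off mentioned above: a bigger vector lowers the false positive rate at the cost of transmitting more bits.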
Sat, 04/28/2007 - 01:16 by Damien Leroy
Broadband wireless access is currently attracting a great deal of interest from the networking research community. In particular, the recently standardized WiMAX is going to serve as a wireless extension of, or alternative to, cable and DSL for broadband access, especially for end users in rural, sparsely populated areas or in areas where laying cable is difficult or expensive. WiMAX will provide a new broadband access path to the Internet. Companies and communities will benefit from WiMAX as well if they require mobile networks that cover a wider area than Wi-Fi.
Sat, 04/28/2007 - 00:37 by Damien Leroy
Systems for active measurements in the Internet are undergoing a radical shift. Whereas the present generation of systems operates on largely dedicated hosts, numbering between 20 and 200, a new generation of easily downloadable measurement software means that infrastructures based on thousands of hosts could spring up literally overnight. Unless carefully controlled, these new systems have the potential to impose a heavy load on the parts of the network being measured. They also have the potential to raise alarms, as their traffic can easily resemble a distributed denial of service (DDoS) attack. Our research aims at examining this problem, and at proposing and evaluating an algorithm for controlling one of the most common forms of active measurement: traceroute.
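One common way to bound the load that measurement probes impose on a network is a token bucket, which caps the average probe rate while still permitting short bursts. The sketch below only illustrates that general idea; it is not the algorithm studied in this work, and the class name and parameter values are hypothetical:

```python
import time

class TokenBucket:
    """Token-bucket limiter sketch: caps the average probe rate, allows bursts.
    The rate and burst values used by a caller are illustrative only."""

    def __init__(self, rate, burst):
        self.rate = rate              # tokens replenished per second
        self.capacity = burst         # maximum burst size
        self.tokens = burst           # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens in proportion to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        # Spend one token per probe; refuse when the bucket is empty.
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A measurement host would call `allow()` before emitting each traceroute probe and back off when it returns `False`, so that thousands of uncoordinated hosts cannot each probe at an unbounded rate.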
Mon, 07/02/2007 - 11:44 by Olivier Bonaventure
The Internet is a very large-scale complex system whose properties are not entirely known. However, a detailed knowledge of the Internet is key to understanding the mechanisms and protocols that work well and those that could be improved.
Sun, 07/15/2007 - 17:57 by Bruno Quoitin
Building increasingly precise and realistic network topologies is an important issue for evaluating networking applications. Still, the generation of router-level topologies has not been widely covered, even though the properties of router-level topologies have a significant impact on simulation results. For instance, the evaluation of applications such as Voice/Video over IP, P2P, routing protocols and traffic engineering methods critically depends on the properties captured by the topology model.
Thu, 05/10/2007 - 23:27 by Olivier Bonaventure
Routing protocols play a key role in large networks such as the Internet because they allow routers to build and update the forwarding tables used to forward packets to their destinations. Although the main routing protocols (OSPF, IS-IS, BGP, PIM, ...) were proposed and implemented several years ago, there is still a lot of room for improvement. We are working on two different threads related to these protocols. The first thread is to develop and implement models such as C-BGP that allow large-scale simulations to be performed. The second thread is to improve various aspects of the performance of these protocols, such as their convergence time or their scalability.
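To illustrate what a forwarding table actually does, the lookup a router performs for each packet is a longest-prefix match: among all prefixes that cover the destination address, the most specific one wins. The prefixes and next-hop addresses below are made-up illustrative values, not taken from any real network:

```python
import ipaddress

# Hypothetical forwarding table: prefix -> next-hop address (illustrative values).
table = {
    ipaddress.ip_network("0.0.0.0/0"):   "192.0.2.1",  # default route
    ipaddress.ip_network("10.0.0.0/8"):  "192.0.2.2",
    ipaddress.ip_network("10.1.0.0/16"): "192.0.2.3",
}

def lookup(dest):
    """Longest-prefix match: return the next hop of the most specific
    prefix that contains the destination address."""
    addr = ipaddress.ip_address(dest)
    matches = [prefix for prefix in table if addr in prefix]
    best = max(matches, key=lambda prefix: prefix.prefixlen)
    return table[best]
```

For example, `lookup("10.1.2.3")` matches all three prefixes but returns the next hop of `10.1.0.0/16`, the most specific one. Routing protocols such as OSPF, IS-IS and BGP are the mechanisms that populate and update such tables as the network changes.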