Edmonton Muttart Conservatory

Monday, September 8

1:30 PM - 2:10 PM

Keynote Speaker 1: Ioanis Nikolaidis, University of Alberta, Edmonton, Alberta, Canada

Title: "Listening to Noise (and making sense of it)"

Abstract: Research on Wireless Sensor Networks (WSNs) is often motivated by applications that involve monitoring the natural environment far from, or rarely involving, human presence. The upshot of such deployments is that, apart from the whimsical nature of wireless propagation, the wireless channel is expected to be "quiet". In contrast, urban environments, where every imaginable machine and electrical gadget might be in operation, are unfriendly to WSNs because interference is ever-present and likely to become rampant in the future. Not all interference is due to communication devices. Microwave ovens, electrical motors, lighting systems, internal combustion engine ignition systems, etc. are some of the many forms of interference that WSN deployments face in urban environments. All the same, in keeping with the WSN design philosophy of making nodes as inexpensive as possible, we do not wish to endow each node with elaborate physical layer capabilities beyond what one can find in off-the-shelf components. In other words, WSN nodes may have to learn to live amidst a sea of interference. Is there at least something we can do about it using information that the nodes are already capable of collecting?

 We review some of the interesting observations made with respect to interference, based on data we collected in an urban indoor WSN as well as other relevant experiments that have appeared in the literature. We find that, equipped with the bare minimum of (and inexpensive to conduct) observations, namely using the Received Signal Strength Indicator (RSSI) to listen to the background noise, we can distinguish a handful of interference patterns. We therefore develop classification schemes for those patterns. We address how classification of interference can be performed accurately and within the small resource footprint of WSN nodes, such that each node can, on its own, decide on the nature of the interference it is observing. We also explore a few ideas on how, once classified, interference can be exploited to the WSN nodes' advantage, and whether per-node classification and subsequent consensus across nodes is a useful strategy.
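As a rough illustration of the kind of small-footprint, per-node classification sketched above (the features, centroid values, and pattern names below are invented for illustration and are not from the talk), a node could reduce an RSSI background-noise trace to a few cheap features and pick the nearest pattern centroid:

```python
# Hypothetical sketch: classify a background-noise RSSI trace into one of a
# few interference patterns using features cheap enough for a sensor node.

def features(rssi):
    """Mean and variance of an RSSI trace (dBm), plus a crude burstiness
    measure: the fraction of samples more than 5 dB above the mean."""
    n = len(rssi)
    mean = sum(rssi) / n
    var = sum((x - mean) ** 2 for x in rssi) / n
    bursty = sum(1 for x in rssi if x - mean > 5) / n
    return (mean, var, bursty)

# Hypothetical pattern centroids in feature space (mean dBm, variance, burstiness).
CENTROIDS = {
    "quiet":     (-95.0,  1.0, 0.00),
    "wifi":      (-80.0, 40.0, 0.10),
    "microwave": (-70.0, 90.0, 0.30),
}

def classify(rssi):
    """Nearest-centroid decision: no on-node training, tiny memory footprint."""
    f = features(rssi)
    return min(CENTROIDS,
               key=lambda k: sum((a - b) ** 2 for a, b in zip(f, CENTROIDS[k])))
```

A real deployment would learn the centroids offline from labeled traces; the point here is only that the per-node decision itself reduces to a few arithmetic operations over samples the radio already provides.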

 A significant part of the presentation will be based on joint work with Nick Boers, Pawel Gburzynski, Aikaterini Vlachaki, and Janelle Harms.

Ioanis Nikolaidis

Speaker Bio: Ioanis Nikolaidis received his B.Sc. from the University of Patras, Greece, in 1989, and subsequently his M.Sc. and Ph.D. from Georgia Tech, USA, in 1991 and 1994 respectively. He worked as a Research Scientist for ECRC GmbH (1994-1996) and joined the University of Alberta in 1997 at the rank of Assistant Professor, where he is now (since 2008) at the rank of Full Professor. He has supervised, or is currently supervising, a total of 13 Ph.D. and 13 M.Sc. students. He has published more than a hundred papers in refereed journals and conferences, 4 book chapters, and recently co-edited with Dr. Krzysztof Iniewski a book entitled "Building Sensor Networks: From Design to Applications". His research interests are in the general area of computer network protocol modeling and simulation, network protocol performance, and wireless sensor network architectures and applications. He is the co-recipient of the best paper award of CNSR 2011, recipient of the best paper presentation of ADHOC-NOW 2012, and co-recipient of the University of Alberta Teaching Unit Award as part of the SmartCondo teaching team. He holds an Adjunct Professor appointment with the Department of Occupational Therapy at the U. of Alberta. He was Area Editor for Computer Networks, Elsevier (2000-2010), and one of the most long-standing Editors (1999-2013) and Editor-in-Chief (2007-2009) of the IEEE Network magazine. He has co-chaired CNSR 2011 and ADHOC-NOW 2004 & 2010. He serves as a steering committee member of WLN (an annual workshop co-located with LCN) and of ADHOC-NOW. He served as an NSF Panel Member in 2010 and he belongs to the MITACS College of Reviewers (2007-). He has served as technical program committee member and reviewer for numerous conferences and journals, as well as for the following funding agencies: NSERC, NSF, Ontario Centers of Excellence, FWF/START (Austria), NTU/IntelliSys (Singapore), NWO (Netherlands). He is a member of IEEE and a lifetime member of ACM.

2:10 Boxing Experience: Measuring QoS and QoE of Multimedia Streaming Using NS3, LXC and VLC

Javier Bustos-Jiménez (NIC Chile Research Labs & Universidad de Chile, Chile); Camila Faúndez (NIC Chile Research Labs, Chile); Rodrigo Alonso (Universidad de Chile, Chile); Hugo Méric (INRIA Chile, Chile)

Quality of Experience (QoE) has been standardized as the overall acceptability of an application or service, as perceived subjectively by the end user, including the complete end-to-end system effects. QoE may therefore be influenced by the user's expectations and context, adding a subjective component to measurements. Nevertheless, studies have shown that some Quality of Service (QoS) metrics, measured close to the application and thus to the user's side, correlate with user evaluations (QoE) for multimedia transmission. In this article we propose that, following the same separation of concerns as the Internet protocol suite, we can build a modular framework to study the relation between QoS and QoE metrics for multimedia transmission. This framework is called BoxingExperience and is built with open source software (NS3 and VLC) and Linux Containers (LXC). To test our framework, we evaluate the performance of BoxingExperience on a new frame buffering/assignment algorithm, concluding that with BoxingExperience a scenario with multiple clients can be easily simulated on a typical desktop computer.
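To illustrate the kind of QoS/QoE relation such a framework is meant to study (the data points and the choice of Pearson correlation here are purely illustrative, not taken from the paper), one can correlate an application-level QoS metric with subjective user scores:

```python
# Illustrative sketch: correlate a per-session QoS metric (frame loss ratio)
# with subjective QoE scores (MOS). All data values below are invented.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

frame_loss = [0.00, 0.02, 0.05, 0.10, 0.20]  # QoS: loss ratio per session
mos_scores = [4.8, 4.5, 3.9, 3.1, 2.0]       # QoE: mean opinion scores

r = pearson(frame_loss, mos_scores)  # strongly negative: more loss, lower QoE
```

In a framework like the one described, the QoS series would come from the emulated network (NS3 plus LXC-hosted VLC endpoints) rather than being hard-coded.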

2:35 Identification of Network Measurement Challenges in OpenFlow-based Service Chaining

RajaRevanth Narisetty (University of Houston, USA); Deniz Gurkan (University of Houston, USA)

Software-defined networking and Network Function Virtualization (NFV) have simplified the coordination efforts for "service chaining." Consequently, network services such as firewalls, load balancers, etc. may be service-chained in the forwarding (data) plane for specific applications and/or traffic. A specific case is firewall rules that depend on deep packet inspection (DPI) for application identification. If a particular application is identified and is "safe," would it be worthwhile to program the data plane to bypass the firewall for the duration of the application session? For such a traffic-steering case, we report measurement challenges on various setups and the related cost analysis based on network delay. Measurements of the network and processing delay have been performed with virtualized resources, on the GENI testbed, and with isolated hardware units. We also report experiences deploying a commercial firewall virtual appliance on the GENI testbed for experimentation. The results illustrate the measurement uncertainties and challenges for DPI-based traffic steering in virtualized environments. In addition, we show that such service chaining may increase throughput and relieve DPI-based processing overhead on firewall units.
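The bypass question posed above can be sketched in miniature (the break-even model, field names, and numbers below are illustrative assumptions, not the paper's setup or any particular controller's API): bypass pays off only when the DPI delay saved over the rest of the session outweighs the one-time cost of programming the rule.

```python
# Hypothetical sketch of the bypass decision and the OpenFlow-style rule
# that would implement it. Field names and values are illustrative only.

def bypass_worthwhile(expected_pkts, dpi_delay_s, install_delay_s):
    """Steer around the firewall only if the per-packet DPI delay saved over
    the remaining session exceeds the one-time rule-installation cost."""
    return expected_pkts * dpi_delay_s > install_delay_s

def bypass_rule(src_ip, dst_ip, dst_port, out_port):
    """A match/action pair sending identified-safe traffic straight to the
    egress port, skipping the firewall hop in the service chain."""
    return {
        "priority": 100,
        "match": {"ipv4_src": src_ip, "ipv4_dst": dst_ip, "tcp_dst": dst_port},
        "actions": [{"type": "OUTPUT", "port": out_port}],
    }
```

The paper's measurement challenges live precisely in the inputs to such a decision: in virtualized environments, `install_delay_s` and `dpi_delay_s` are hard to measure with confidence.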

3:30 PM - 4:10 PM

Keynote Speaker 2: Dwight Makaroff, University of Saskatchewan, Saskatoon, Saskatchewan, Canada

Title: Network Performance Measurement for Real-Time Multiplayer Mobile Games

Abstract: Player satisfaction with real-time multiplayer mobile games is known to be directly correlated with the performance of the communications network, particularly with variation in latency (jitter). The network is the most dynamic component of such games, and congestion and channel loss figure prominently in achieving the latency bounds required for real-time response. The upper bound on message delivery latency for a game to be considered playable varies with game type, but is typically less than 250 milliseconds. In many cases, the underlying network cannot reliably deliver messages within the required window, and game developers must use a variety of predictive techniques to maintain the believability of the game experience. Under favourable network conditions, additional game-play complexity requiring more bandwidth and/or game state processing becomes possible.

 In this talk, we describe our efforts to provide a lightweight, embedded measurement framework that can be integrated into the game play experience at the application level. We implement these features on top of the industry-standard Unity3D game engine, and deploy a test game over WiFi, Cellular, and Bluetooth networks. Particular measures of interest are the frame rate, one-way latency, and frame processing period within a gameplay session. The captured data can be used by game designers to tune game complexity and to manage predictive algorithm parameters. Game designers use these predictive algorithms to maintain an approximation of what occurs in real time, despite delays from network transmission, and can use this information to provide game options that appropriately restrict resource utilization so as to deliver the maximum-sustainable-quality game experience. This provides the best opportunity to retain engaged players, who contribute to the data collection loop. The network measurements can also be of use to service providers, as delay and congestion indications can be piggybacked on game traffic packets. This enables their capacity planning with respect to quality of service for the user base.

 We will present results characterizing various Cellular environments to provide bounds on game designs feasible with current network technology as deployed in urban and rural areas around North America and Europe. As well, we will outline the development and use of the performance models in game design techniques, as deployed in the multiplayer network game environment on mobile networks.

Dwight Makaroff

Speaker Bio: Dwight Makaroff received his B.Comm. and M.Sc. (Computational Science) from the University of Saskatchewan in 1985 and 1988, respectively. He taught at Bethel College, St. Paul, MN, and Trinity Western University, Langley, BC before returning to complete his Ph.D. in Computer Science at the University of British Columbia in 1998, where he designed and implemented a Variable Bit Rate Continuous Media Server. He was an Assistant Professor at the School of Information Technology and Engineering at the University of Ottawa from 1999 until 2001. He has been a faculty member in the Department of Computer Science at his alma mater, the University of Saskatchewan, since 2001, reaching the rank of Full Professor in 2012. He has supervised or co-supervised 16 graduate students (including current students) in the research area of performance of distributed systems, including multimedia protocols and servers, electronic commerce, sensor and mesh networks, and mobile and web applications from the application to the operating system level. He has published more than 35 papers in refereed journals and conference proceedings. He has served on numerous technical program committees and been a reviewer for a number of conferences and journals in the multimedia, network and systems areas.

4:10 On the Analysis of Backscatter Traffic

Eray Balkanli (Dalhousie University, Canada); Nur Zincir-Heywood (Dalhousie University, Canada)

This work offers an in-depth analysis of three darknet datasets captured in 2004, 2006 and 2008 to provide insights into the nature of backscatter traffic. Moreover, we analyzed these datasets using two well-known open source intrusion detection systems (IDSs), namely Snort and Bro. Our analysis shows that there are interesting trends in these datasets that help us understand backscatter traffic over a four-year period. However, it also shows that it is challenging to identify the attacks that generated this traffic.
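For readers unfamiliar with backscatter: a darknet mostly sees the victims' replies to attack packets whose spoofed source addresses happened to fall in the darknet's range. A minimal sketch of flagging such replies (the packet model and heuristics below are our own illustration, not the paper's methodology or Snort/Bro rules):

```python
# Illustrative heuristic: backscatter on a darknet is typically a victim's
# reply to spoofed-source traffic, e.g. TCP SYN-ACKs and RSTs, or ICMP
# error messages. Flag values follow the TCP header bit definitions.

SYN, ACK, RST = 0x02, 0x10, 0x04

def is_backscatter(proto, tcp_flags=0, icmp_type=None):
    """Crude backscatter check on a single darknet packet."""
    if proto == "tcp":
        # SYN-ACK (handshake reply) or any RST suggests a response to
        # a connection attempt the darknet never made.
        return tcp_flags == SYN | ACK or bool(tcp_flags & RST)
    if proto == "icmp":
        # Destination-unreachable (3) and time-exceeded (11) errors are
        # common backscatter; echo requests (8) are scans, not replies.
        return icmp_type in (3, 11)
    return False
```

Heuristics like this classify the traffic but, as the paper notes, say little about which attack actually generated it.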

4:35 Annotating Network Trace Data for Anomaly Detection Research

Andreas Löf (University of Waikato, New Zealand); Richard Nelson (University of Waikato, New Zealand)

Anomaly detection holds significant promise for automating network operations and security monitoring, and many detection techniques have been proposed. Evaluating and comparing such techniques requires up-to-date datasets, useful truth data, and the ability to record the outputs of the techniques in a common format. Existing datasets for network anomaly detection are either limited and aged or lacking in truth data. This paper presents a new annotation format allowing network datasets to be annotated with arbitrary event data. Use of the new format is demonstrated in a method to create new datasets that retain more information than a simple network capture. The supporting tools for the annotation format allow events from multiple different sources to be incorporated. The ability to record and share network data and detected anomalies is a key component in moving anomaly detection research forward.
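The paper's actual format is not reproduced here, but the general idea of attaching arbitrary, multi-source event records to a capture can be sketched as follows (all field names are hypothetical):

```python
# Illustrative sketch of trace annotation: truth-data events from multiple
# sources (manual labels, syslog, detector output) attached to a capture
# as time-ranged records. Field names are invented for illustration.
import json

def annotate(events, source, start_ts, end_ts, label, **extra):
    """Append one event covering [start_ts, end_ts] seconds in the trace."""
    events.append({
        "source": source,   # e.g. "manual", "syslog", "detector-X"
        "start": start_ts,
        "end": end_ts,
        "label": label,
        **extra,            # arbitrary event-specific data
    })
    return events

events = annotate([], "manual", 1409875200.0, 1409875260.0,
                  "port-scan", target="10.0.0.0/24")
record = json.dumps(events[0])  # serializable alongside the capture file
```

Keeping events as self-describing records with a source field is what lets annotations from different detectors and from ground truth coexist in one dataset.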