Invited Speakers
Titles and Abstracts of the Invited Talks
We consider a fundamental problem concerning the deployment of a wireless robotic network: to fulfill various end-to-end performance requirements, a sufficient number of robotic relays must be deployed to ensure that links are of acceptable quality. Prior work has not addressed how to find this number. We use the properties of Carrier Sense Multiple Access (CSMA) based wireless communication to derive an upper bound on the spacing between any transmitter-receiver pair, which directly translates to a lower bound on the number of robots to deploy. We focus on SINR-based performance requirements due to their wide applicability. Next, we show that the bound can be improved by exploiting the geometric structure of a network, such as linearity in the case of flow-based robotic router networks. Furthermore, we use the bound on robot count to formulate a lower bound on the number of orthogonal codes required for a high probability of interference-free communication. We demonstrate and validate our proposed bounds through simulations.
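To make the chain of reasoning concrete, here is a toy link-budget calculation, not the talk's actual derivation: under an assumed log-distance path-loss model and hypothetical radio parameters, an SINR requirement caps the transmitter-receiver spacing, which in turn lower-bounds the relay count along a route. All function names and numbers below are illustrative assumptions.

```python
import math

def max_link_distance(p_tx_dbm, sinr_min_db, noise_dbm, interference_dbm,
                      pl0_db=40.0, d0=1.0, alpha=3.0):
    """Largest TX-RX spacing that still meets the SINR requirement, under a
    log-distance path-loss model PL(d) = PL0 + 10*alpha*log10(d/d0)."""
    # Sum noise and interference power in the linear (mW) domain.
    n_plus_i_mw = 10 ** (noise_dbm / 10) + 10 ** (interference_dbm / 10)
    n_plus_i_dbm = 10 * math.log10(n_plus_i_mw)
    # Minimum received power, hence the maximum tolerable path loss.
    p_rx_min_dbm = sinr_min_db + n_plus_i_dbm
    pl_max_db = p_tx_dbm - p_rx_min_dbm
    return d0 * 10 ** ((pl_max_db - pl0_db) / (10 * alpha))

def min_relay_count(route_length_m, d_max):
    """Lower bound on robots: enough interior relays that no hop exceeds d_max."""
    return math.ceil(route_length_m / d_max) - 1

d = max_link_distance(p_tx_dbm=20, sinr_min_db=10,
                      noise_dbm=-90, interference_dbm=-85)
# With these assumed parameters, d is about 62 m, so a 500 m linear route
# needs at least 8 interior relays.
relays = min_relay_count(500, d)
```

The linear (flow-based) route in the second function is exactly the geometric structure the abstract says can tighten the bound.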
Speaker's Bio : Bhaskar Krishnamachari is a Professor and Ming Hsieh Faculty Fellow in Electrical Engineering, and Director of the Autonomous Networks Research Group at the University of Southern California's Viterbi School of Engineering. He works on the design and analysis of algorithms and protocols for next-generation wireless networks, including low power embedded networks, vehicular networks, and robotic networks. His co-authored papers have received best paper awards at IPSN (2004, 2010), MSWiM (2006) and MobiCom (2010), a best paper runner-up at SECON (2012), and a top-three paper at MSWiM (2014). He has received the NSF CAREER award (2004) and the ASEE Terman Award (2010), and has been included on Technology Review Magazine's TR-35 list (2011) and Popular Science's Brilliant 10 list (2015). He authored a book titled Networking Wireless Sensors, published by Cambridge University Press in 2005.
The notion of edge computing introduces new computing functions away from centralized locations and closer to the network edge, thus facilitating new applications and services. This enhanced computing paradigm provides application developers with new opportunities that are not available otherwise. In this talk, I will discuss why placing computation functions at the extreme edge of our network infrastructure, i.e., in wireless Access Points and home set-top boxes, is particularly beneficial for a large class of emerging applications. I will discuss a specific approach, called ParaDrop, to implement such edge computing functionalities, and use examples from different domains -- smarter homes, sustainability, and intelligent transportation -- to illustrate the new opportunities around this concept.
Speaker's Bio : Suman Banerjee is a Professor in Computer Sciences at UW-Madison, where he is the founding director of the WiNGS laboratory, which broadly focuses on research in wireless and mobile networking systems. He received his undergraduate degree from IIT Kanpur, and MS and PhD degrees from the University of Maryland. He is the inaugural recipient of the ACM SIGMOBILE Rockstar award and a recipient of the NSF CAREER Award. He has received multiple best paper awards at various conferences, such as ACM MobiCom, ACM CoNEXT, and IEEE DySPAN. He is currently serving as the chair of ACM SIGMOBILE.
The deployment scale of IoT networks is anticipated to be orders of magnitude higher than mobiles that make up today's fringe Internet. To enable deployment at this scale, the deployment process must be as automated as possible. This talk will describe a GIS-enabled, measurement-aided, automated relay placement framework to enable fringe nodes (sensor nodes or motes) to connect wirelessly to a target data-fusion centre, and its extension to deal with the challenges of a heterogeneous fading propagation environment.
Speaker's Bio : Rajesh Sundaresan is a Professor at the Department of Electrical Communication Engineering and an Associate Faculty of the Robert Bosch Centre for Cyber Physical Systems. He received his B.Tech. from IIT Madras, M.A. and Ph.D. from Princeton University, worked on the WCDMA and HSDPA systems at Qualcomm during 1999-2005, and has been at IISc since 2005. His current research is on communication, computation, and control over networks, with application to the Internet of Things and cyber physical systems.
In order to move from the current “illness”-driven model to a “wellness”-driven model in healthcare, one needs to build affordable, easily usable and mass deployable solutions. This is particularly true for developing countries like India. In this talk we look at early detection and screening for lifestyle diseases like coronary artery disease (CAD), diabetes and hypertension using mobile phones and low-cost attachments to mobile phones followed by signal processing and machine learning based analytics. We also look at creating an affordable tele-home-care based rehabilitation therapy solution for stroke patients using Kinect to help in diagnosis, assessment and therapy compliance. We present results on pilot studies done on patients in India and also on open datasets.
IoT-enabled smart environments typically include a large number of simple sensors that are designed to detect specific events. In many environments, however, not one but combinations of sensor events represent activities of interest (such as activities of daily living of a patient in a smart home, student engagement in a classroom, etc.). Detecting and monitoring these activities of interest yields both application-specific benefits and operational benefits. However, human activities often overlap, making activity detection from the collected sensor events a challenging problem. In this talk, I will first present the various benefits of such materialization of sensor events into activities at a higher level of abstraction, and then discuss the challenges in detecting overlapping activities and possible approaches to de-multiplex them. More interestingly, the diversity of human activities and the time-variability of a given activity by the same human make reliable detection of activities even harder, and open up interesting avenues for future research.
Speaker's Bio : Ravi Kokku is a research staff member and manager of Cognitive Learning Platforms research at IBM T. J. Watson Research Center, USA. Before that, he was a research scientist and manager of the Telecom Infrastructure and Platforms group at IBM Research, India, and was a researcher at NEC Laboratories America, Princeton. He obtained his M.S. and Ph.D. in Computer Sciences from The University of Texas at Austin, and B.Tech in Computer Science and Engineering from the Indian Institute of Technology, Kharagpur. Over the last 11 years of industry research, his work has focused on robust resource management solutions in various domains, including IoT environments, Telecom and enterprise WiFi networks, packet processing systems, and wide-area replication systems.
The RF spectrum is typically monitored from a single, or a few, vantage points. A larger spatio-temporal view of spectrum occupancy, such as over a few weeks on a city-wide scale, would be beneficial for several applications, for example, spectrum inventory by regulators or spectrum monitoring by wireless carriers. However, achieving such a view requires a dense deployment of spectrum analyzers, both in space and time, which is prohibitively expensive. In this paper, we present a novel, efficient approach to obtain an accurate extrapolated spatio-temporal view of spectrum occupancy. Our method uses RSSI measurements alone and does not require a priori information about terrain, transmitter location, transmit power, or path-loss model. We present our method as an algorithmic framework, called SpectraMap, which, through targeted deployment of both static and mobile spectrum analyzers, gives a view of the spectrum occupancy over both time and space. We contrast SpectraMap's accuracy with that of Kriging (an accepted, well-performing method of RSSI spatial extrapolation) through simulations and present RSSI map-construction savings achieved through actual deployment on a large university campus. Finally, we draw a theoretical distinction between SpectraMap and relevant contemporary solutions in the fields of space-time RSSI maps and spectrum management.
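The abstract benchmarks against Kriging for spatial extrapolation of RSSI. As a toy illustration of the simpler end of that design space (this is not SpectraMap's algorithm, and the function name, coordinates, and readings are hypothetical), the sketch below estimates RSSI at an unmeasured point by inverse-distance weighting of sparse sensor readings:

```python
def idw_rssi(measurements, query, power=2.0):
    """Estimate RSSI (dBm) at `query` from sparse readings via
    inverse-distance weighting, a simple stand-in for Kriging.

    measurements: list of ((x, y), rssi_dbm) pairs.
    """
    num = den = 0.0
    for (x, y), rssi in measurements:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0:
            return rssi  # query coincides with a sensor location
        w = 1.0 / d2 ** (power / 2)  # nearer sensors weigh more
        num += w * rssi
        den += w
    return num / den

readings = [((0, 0), -60.0), ((10, 0), -70.0), ((0, 10), -70.0)]
estimate = idw_rssi(readings, (5, 5))  # equidistant, so the plain mean
```

Unlike Kriging, inverse-distance weighting ignores the spatial correlation structure of shadowing, which is one reason the abstract treats Kriging as the stronger baseline.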
Built-in cameras on mobile and wearable devices enable a number of vision-related applications, such as mobile augmented reality, continuous sensing, and life-logging systems. While wearable cameras with smaller size and higher resolution bring joy and convenience to human lives, being recorded by unreliable cameras has raised people's concerns about visual privacy, particularly the potential leak of identity. Consequently, protecting the identity of people who are not willing to appear in a photo or video has become an urgent issue that has yet to be resolved. In this paper, we propose a novel interactive method for individuals to control their visual privacy, and we implement the prototype on Android smartphones. It allows individuals to inform cameras of their privacy-control intentions through interaction using gestures and tags. Corresponding processing, such as blurring the face, is performed to remove the individual's identifiable information. We have conducted experiments under different conditions to demonstrate the effectiveness and usability of our approach. Our interactive visual privacy control method takes advantage of interaction between human and device, opening new ways for individuals to control their privacy.
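The "corresponding processing" step, blurring a region that an individual has opted out of, can be sketched in a few lines. This is an illustrative box blur on a plain list-of-lists grayscale image, not the paper's Android implementation; the function name and parameters are assumptions.

```python
def blur_region(img, x0, y0, x1, y1, k=3):
    """Box-blur the rectangle [x0, x1) x [y0, y1) of a grayscale image
    (list of rows of ints), e.g. to hide an opted-out face."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # leave the input image untouched
    for y in range(y0, y1):
        for x in range(x0, x1):
            # Average a (2k+1) x (2k+1) window clipped to the image bounds.
            vals = [img[j][i]
                    for j in range(max(0, y - k), min(h, y + k + 1))
                    for i in range(max(0, x - k), min(w, x + k + 1))]
            out[y][x] = sum(vals) // len(vals)
    return out
```

A real pipeline would first locate the face (the paper's gesture/tag interaction tells the camera whose face), then apply a blur like this, or stronger pixelation, to that region alone.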
In addition to position sensing, GPS receivers can be leveraged for orientation sensing too. We place multiple GPS receivers on drones and translate their relative positions into orientation. Such an orthogonal mode of orientation sensing provides a failsafe under inertial sensor failures, a primary cause of drone crashes today. This paper integrates GLONASS satellite measurements with GPS for enhancing the orientation accuracy. Accurate estimation of orientation depends upon high-precision relative positioning of the GPS receivers. While GPS carrier phases provide high-precision ranging data, the phases are noisy and wrap after every wavelength, which introduces ambiguity. Moreover, GPS signals experience poor SNR and loss of satellite locks under aggressive flights. This can severely limit both the accuracy and the amount of carrier phase data available. Fortunately, integrating the ubiquitously available Russian GLONASS satellites with GPS can double the amount of observations and substantially improve the robustness of orientation estimates. However, the fusion is non-trivial because of the operational difference between FDMA-based GLONASS and CDMA-based GPS. This paper proposes a temporal differencing scheme for fusion of GLONASS and GPS measurements, through a system called SafetyNet. Results from 11 sessions of 5-7 minute flights report median orientation accuracies of 2° even under overcast weather conditions.
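The core geometric idea, relative receiver positions translating into orientation, can be illustrated with the simplest case of two antennas mounted along the drone's body axis. This is a toy calculation under assumed antenna placement and an ENU coordinate frame, not SafetyNet's carrier-phase estimator:

```python
import math

def heading_from_baseline(p_front, p_rear):
    """Yaw and pitch (degrees) of a body from the relative positions of two
    GNSS antennas on its longitudinal axis.

    p_front, p_rear: (east, north, up) positions in metres.
    """
    dx = p_front[0] - p_rear[0]  # east component of the baseline
    dy = p_front[1] - p_rear[1]  # north component
    dz = p_front[2] - p_rear[2]  # up component
    yaw = math.degrees(math.atan2(dx, dy)) % 360.0   # 0 deg points north
    pitch = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return yaw, pitch

# Front antenna 1 m NE of and 0.1 m above the rear one: heading 45 deg,
# slight nose-up pitch.
yaw, pitch = heading_from_baseline((1.0, 1.0, 0.1), (0.0, 0.0, 0.0))
```

The hard part the paper addresses is obtaining those relative positions to centimetre precision from noisy, ambiguity-prone carrier phases, which is where fusing GLONASS observations with GPS pays off.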
Applications belonging to the emerging domains of recognition, mining, synthesis, graphics, and computer vision often exhibit the property of intrinsic application resilience, which enables them to produce outputs of acceptable quality even when some of their underlying computations are performed in an approximate or inexact manner. Approximate computing is a new and promising design paradigm that exploits this application resilience to substantially improve the energy consumption and performance of the computing systems that execute them. This talk presents a brief overview and sampling of hardware and software techniques proposed for approximate computing, followed by a discussion of our recent work that calls for the adoption of a full-system perspective while designing approximate computing systems to maximize energy benefits.
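One well-known software technique in this space, offered here only as a generic illustration and not as a technique from the speaker's own work, is loop perforation: skip a fraction of loop iterations to trade a small accuracy loss for proportionally less work.

```python
def mean_exact(xs):
    """Baseline: average every element."""
    return sum(xs) / len(xs)

def mean_perforated(xs, skip=2):
    """Loop perforation: process only every `skip`-th element, doing
    roughly 1/skip of the work for a slightly approximate answer."""
    sampled = xs[::skip]
    return sum(sampled) / len(sampled)

data = list(range(1000))
exact = mean_exact(data)           # 499.5, touching all 1000 elements
approx = mean_perforated(data, 4)  # 498.0, touching only 250 elements
```

A resilient application (say, a recommender averaging ratings) tolerates the 0.3% error here; the full-system perspective the talk argues for asks whether the saved iterations actually translate into end-to-end energy savings.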
The last few years have witnessed the coming of age of the data-driven paradigm in various aspects of computing, empowered in part by advances in distributed systems research (cloud computing, MapReduce, etc.). In this paper, we observe that the benefits can flow in the opposite direction: the design and management of networked systems can be improved by the data-driven paradigm. To this end, we present DDN, a new design framework for network protocols based on the data-driven paradigm. We argue that DDN has the potential to achieve significantly better performance by harnessing more data than a single flow provides. Furthermore, we systematize existing instantiations of DDN by creating a unified framework for DDN, and use the framework to shed light on the common challenges and reusable design principles. We believe that by systematizing this paradigm as a broader community, we can unleash the unharnessed potential of DDN.
Canes or service dogs in indoor environments are unable to provide spatial information to Visually Impaired Persons (VIPs) to make them independent. An indoor navigation assistance system can provide information on the presence of any obstacles in their vicinity, the distance of separation, and their direction of motion (in the case of mobile objects) with respect to the VIPs. In this paper, we attempt to address the above objective by designing a novel, time-efficient algorithm in which a smart-glass is employed to spot an obstacle (stationary or mobile) in an indoor environment using the inbuilt camera and inertial sensors. The system is implemented and tested extensively in indoor settings.
As we move into the Digital Era, organizations seeking digital transformation must also undertake network transformation. What key outcomes do businesses look to enable in the digital era? How do enterprise network architectures need to evolve to enable these business outcomes and new business models? In this session, we will review the key problem statements and use cases in support of enabling business outcomes in the digital era. We must evolve the network beyond connectivity into a platform for Insights, Automation, and Security. We will review the role of machine learning and deep learning algorithms in delivering context-aware network experiences. We will demonstrate how we put Insights, Automation, and Security to work through a real-life use case, and close with key takeaways.