Celebrating Fifteen Years of COMSNETS


COMSNETS 2023

15th International Conference on COMmunication Systems & NETworkS

January 3 - 8 | Hybrid Conference
Chancery Pavilion Hotel, Residency Road, Bengaluru, India

Initiative by COMSNETS Association



Workshop on Machine Intelligence in Networked Data and Systems (MINDS)

Event Date: Sunday, 8 January 2023


Important Deadlines


Paper Submission Deadline: 20th November 2022 (AoE) (extended from 10th November 2022)
Notification of Acceptance: 10th December 2022
Camera-ready Submission: 15th December 2022
Workshop Date: 8th January 2023

Schedule

Sunday, 8 January 2023
Venue: Indian Affairs
Time Title Speaker/Authors
9.30 : 9.35 Welcome by Workshop Chairs
9.35 : 10.05 Keynote 1: Deep Learning meets Software Engineering Dr. Aditya Kanade, Principal Researcher, Microsoft Research, India
10.05 : 10.15 Keynote 1: Q&A
10.15 : 10.55 Keynote 2: Hammer vs. Gavel, or How I Learned to Stop Learning and Love the Old-Fashioned Algorithm Prof. Indranil Gupta, Professor, University of Illinois, Urbana-Champaign, USA
10.55 : 11.05 Keynote 2: Q&A
11.05 : 11.30 Tea Break
11.30 : 12.00 Invited Talk 1: Learning Optimal Phase-Shifts of Holographic Metasurface Transceivers Dr. Manjesh Hanawal, Associate Professor, Industrial Engineering and Operations Research, Indian Institute of Technology Bombay, Mumbai, India
12.00 : 12.15 Paper 1: ZoneSync: Real-Time Identification of Zones in IoT-Edge Manish Kausik H, Jagnyashini Debadarshini, Himanshu Goyal, Sudipta Saha
12.15 : 12.30 Paper 2: Scene Reconstruction and Trajectory Estimation in Hierarchical MANET Atrayee Gupta
12.30 : 12.45 Paper 3: Financial Fake News Detection via Context-Aware Embedding and Sequential Representation using Cross-Joint Networks Padmapriya Mohankumar, Ashraf Kamal, Vishal Kumar Singh, Amrish Satish
12.45 : 14.00 Lunch Break
14.00 : 14.15 Paper 4: Estimating Task Completion Times for Network Rollouts using Statistical Models within Partitioning-based Regression Methods Venkatachalam Natchiappan, Shrihari Vasudevan, Thalanayar Muthukumar
14.15 : 14.30 Paper 5: Network Intrusion Detection Through Machine Learning With Efficient Feature Selection Rohan Desai, Venkatesh Tiruchirai Gopalakrishnan
14.30 : 14.45 Paper 6: Understanding Network Nodal Points for Emergency Services M Saravanan, V Rajagopalan, Divya Sachdeva
14.45 : 15.00 Paper 7: SIRM: Cost efficient and SLO aware ML prediction on Fog-Cloud Network Chetan Phalak, Dheeraj Chahal, Rekha Singhal
15.00 : 15.30 Tea Break
15.30 : 16.10 Keynote 3: Fast and Furious for your Hardest Data Analytics Tasks: From Serverful to Serverless Cloud Computing Dr. Somali Chaterji, Assistant Professor, Agricultural and Biological Engineering (ABE) & Elmore School of Electrical and Computer Engineering (ECE), Purdue University; CEO, KeyByte
16.10 : 16.20 Keynote 3: Q&A
16.20 : 16.50 Invited Talk 2: Continuous Time Bandits: A new learning model Dr. Rahul Vaze, Associate Professor, School of Technology and Computer Science, Tata Institute of Fundamental Research, Mumbai, India
16.50 : 17.00 Concluding Remarks and Best Paper Award

Keynote Speakers

Somali Chaterji

Assistant Professor, Agricultural and Biological Engineering (ABE) & Elmore School of Electrical and Computer Engineering (ECE), Purdue University; CEO, KeyByte

Visit Homepage

For the majority of my talk, I will tell you about WiseFuse and Orion [Sigmetrics 2022, OSDI 2022], which perform end-to-end optimization of serverless DAG workflows, driven by our analysis of real serverless cloud computing workloads from Microsoft Azure. Serverless is a paradigm in cloud computing that has taken the world by storm, as it enables users to “rent” resources for short durations and takes away the tedium of cloud configuration. Our work shows how serverless computing can be used to run complex machine learning applications. Concretely, it introduces two optimizations: horizontal colocation, or bundling of parallel invocations of a function, and vertical fusion of in-series functions, while rightsizing the VMs hosting these functions.
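The vertical-fusion idea can be sketched in a few lines. This is a toy illustration only, not WiseFuse's or Orion's actual mechanism, and the 50 ms per-invocation overhead is an invented constant: composing in-series DAG stages into one function keeps intermediate results in memory and pays the invocation overhead once instead of once per stage.

```python
def fuse(*stages):
    """Vertical fusion (toy): compose in-series DAG stages into a single
    function so intermediate results never cross an invocation boundary."""
    def fused(x):
        for stage in stages:
            x = stage(x)
        return x
    return fused

INVOKE_OVERHEAD_MS = 50  # hypothetical per-invocation cost, for illustration

def run_unfused(stages, x):
    """Run each stage as its own 'invocation', paying overhead every hop."""
    cost = 0
    for stage in stages:
        cost += INVOKE_OVERHEAD_MS
        x = stage(x)
    return x, cost

stages = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
fused = fuse(*stages)
result_fused = fused(5)                        # one invocation boundary
result_unfused, cost = run_unfused(stages, 5)  # three invocation boundaries
```

Both paths compute the same result ((5 + 1) * 2 - 3 = 9); in this model the fused version pays one invocation overhead instead of three.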
In the latter part of my talk, I will introduce the principled approximation we are doing for computer vision applications [LiteReconfig, EuroSys 2022; SmartAdapt, CVPR 2022], which generate high-rate data streams and often need low-latency analytics on those streams. My research targets downsizing computation to fit within mobile and embedded devices while satisfying client SLOs (service-level objectives), focusing on video object detection and classification, semantic segmentation, and operation in tandem with drones.

Somali Chaterji (pronounced shoh-MAH-lee CHA-ter-jee) is the CEO and co-founder of the academic startup KeyByte, a blazing-fast cloud computing company, and an Assistant Professor, specializing in data engineering and applied machine learning, in the Colleges of Engineering and Agriculture at Purdue University. She received her PhD in Biomedical Engineering from Purdue, winning the Chorafas Dissertation Award and the Future Faculty Fellowship. She followed this up with a postdoc in Biomedical Engineering at UT Austin and another postdoc in Computer Science at Purdue. She leads The Innovatory for Cells and Neural Machines (ICAN) at Purdue. ICAN innovates at the nexus of computer vision and mobile systems [Thrust 1], on the one hand, and at the interface of machine learning and genomics [Thrust 2], on the other.
Somali has received funding from NIH (R01), DOD (ARL), NSF (CISE-CNS, CAREER), USDA, as well as private industries like Amazon, Microsoft Azure, and Adobe Research. She serves on Program Committees of conferences in IoT, computer systems, machine learning, and computational genomics like Usenix ATC, EuroSys, HotStorage, Middleware, ICML, CVPR, and NeurIPS. She won the NSF-CAREER award from CISE in January 2022, which is shaping up its cyber nook here: https://schaterji.io/projects/sirius/
More: https://schaterji.io/extended-bio.html


Indranil Gupta

Professor, University of Illinois, Urbana-Champaign, USA

Visit Homepage

System designers constantly struggle with this question -- Should DNNs/RL be used to solve systems problems? This talk presents a specific study of one particular systems problem we addressed recently (model parallelism), and our lessons learned.

Indranil Gupta (Indy) is a Professor of Computer Science at the University of Illinois at Urbana-Champaign. He works on Distributed Systems + X, ranging from algorithms to design and implementation to production systems, across multiple areas of cloud/cluster computing, Edge, IoT, ML systems, and with collaborations in (X = ) verification, ML, HCI, etc. His work has won multiple Best Paper awards. Indy has worked at Google, IBM Research, and Microsoft Research. He has participated in multiple industry production systems, and his work has been adopted by companies small to large. Indy's popular podcast featuring interviews, called ''Immigrant Computer Scientists,'' is available free: http://csimmigrant.org/


Aditya Kanade

Principal Researcher, Microsoft Research, India

Visit Homepage

Deep learning has achieved remarkable breakthroughs in many areas such as NLP, vision and speech, by making sense of large-scale data. Software engineers work with and produce varied kinds of data, ranging from source code, code reviews, and execution traces to online discussion and QA forums such as StackOverflow. Can we achieve similar breakthroughs by applying deep learning to software engineering? Can we learn from large codebases to improve developer productivity and software quality?

In this talk, I will discuss the motivation and potential of deep learning for software engineering. I will discuss recent work on neural program repair, pre-trained models of source code and program synthesis using large language models. I will also broadly discuss applications and implications of deep learning for software engineering, and future prospects.

Aditya Kanade is a principal researcher at Microsoft Research India. He is interested in all aspects of building trustworthy, scalable and intelligent systems. His research has spanned the areas of artificial intelligence, formal methods, programming languages and software engineering. He is particularly fascinated by the prospect of designing deep-learning models that can write computer programs, solve complex tasks with multiple procedural or reasoning steps, and generalize reliably. He is also interested in problems at the intersection of AI/ML applications and large-scale systems. Before joining Microsoft Research, he was a full professor at the Indian Institute of Science (2009-2022). He also spent two years as a staff visiting researcher at Google Brain (2018-2020). He has received an ACM best paper award, a teaching excellence award, and faculty awards from industry.


Invited Speakers

Rahul Vaze

Associate Professor, School of Technology and Computer Science, Tata Institute of Fundamental Research, Mumbai, India

Visit Homepage

We consider a continuous-time multi-arm bandit problem (CTMAB), where the learner can sample arms any number of times in a given interval and obtains a random reward from each sample; however, increasing the sampling frequency incurs an additive penalty/cost. Thus, there is a tradeoff between obtaining a large reward and incurring sampling cost as a function of the sampling frequency. The goal is to design a learning algorithm that minimizes the regret, defined as the difference between the payoff of the oracle policy and that of the learning algorithm. CTMAB is fundamentally different from the usual multi-arm bandit problem (MAB); e.g., even the single-arm case is non-trivial in CTMAB, since the optimal sampling frequency depends on the mean of the arm, which needs to be estimated. We establish lower and upper bounds that are tight up to logarithmic terms.
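The single-arm subtlety can be made concrete with a toy simulation. The quadratic sampling-cost model below (payoff rate f*mu - c*f**2, hence optimal frequency f* = mu/(2c)) and all its constants are illustrative assumptions, not the formulation from the talk; the point is only that the best frequency depends on the unknown mean mu, which must itself be estimated from samples.

```python
import random

def simulate_single_arm_ctmab(mu=0.7, c=0.5, horizon=10_000, seed=0):
    """Toy single-arm CTMAB: with payoff rate f*mu - c*f**2, the optimal
    sampling frequency f* = mu / (2c) depends on the unknown mean mu.
    The learner keeps a running estimate of mu and tracks f*."""
    rng = random.Random(seed)
    est_mu, n, regret = 0.0, 0, 0.0
    f_opt = mu / (2 * c)
    opt_rate = f_opt * mu - c * f_opt ** 2
    for _ in range(horizon):
        # explore at a fixed frequency first, then exploit the estimate
        f = est_mu / (2 * c) if n >= 5 else 1.0
        reward = rng.gauss(mu, 0.1)          # noisy sample of the arm
        n += 1
        est_mu += (reward - est_mu) / n      # running mean of rewards
        regret += opt_rate - (f * mu - c * f ** 2)
    return est_mu, regret / horizon

est, avg_regret = simulate_single_arm_ctmab()
```

Since the payoff rate is maximized exactly at f*, the per-round regret is always nonnegative, and it shrinks as the estimate of mu concentrates.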

Rahul Vaze obtained his Ph.D. from The University of Texas at Austin in 2009. Currently, he is an Associate Professor at the School of Technology and Computer Science, Tata Institute of Fundamental Research, Mumbai, India. His research interests are in communication networks, combinatorial resource allocation, and online algorithms. He is the author of ''Random Wireless Networks'' (Cambridge University Press, 2015). He is a co-recipient of the 2010 best paper award of the EURASIP Journal on Wireless Communications and Networking, the best paper awards at WiOpt 2020 and Performance 2020, and the best paper runner-up award at WiOpt 2022.


Manjesh Hanawal

Associate Professor, Industrial Engineering and Operations Research at the Indian Institute of Technology Bombay, Mumbai, India

Visit Homepage

Holographic metasurface transceivers (HMTs) are an emerging technology for enhancing the coverage and rate of wireless communication systems. However, acquiring accurate channel state information in HMT-assisted wireless communication systems is critical for achieving these goals. We propose an algorithm to learn the optimal phase-shifts at an HMT under the far-field channel model. Our algorithm exploits the structure of the channel gains in the far-field region and learns the optimal phase-shifts from noisy received signals. We prove that the probability that the phase-shifts estimated by our algorithm deviate from the true values decays exponentially in the number of pilot signals. Extensive numerical experiments validate the theoretical guarantees and demonstrate significant gains compared to state-of-the-art policies. We also discuss how the unimodal properties of the received signal strength can be exploited to improve learning performance.
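The learning scheme can be sketched as follows, under a deliberately simplified far-field model; the channel model, candidate grid, and noise level here are illustrative assumptions, not the paper's setup. For each candidate phase-shift, average the received amplitude over noisy pilot signals and pick the best candidate; by a Hoeffding-style argument, the probability of misidentifying the best phase-shift decays exponentially in the number of pilots.

```python
import cmath
import math
import random

def estimate_phase_shift(true_phase, candidates, pilots=200, noise=0.3, seed=1):
    """Pick the candidate phase-shift whose average received amplitude over
    noisy pilot signals is largest (toy far-field model, for illustration)."""
    rng = random.Random(seed)

    def avg_amplitude(theta):
        total = 0.0
        for _ in range(pilots):
            # direct path (1) plus reflected path; the reflection combines
            # coherently when the applied shift theta matches true_phase
            reflected = cmath.exp(1j * (theta - true_phase))
            n = complex(rng.gauss(0.0, noise), rng.gauss(0.0, noise))
            total += abs(1 + reflected + n)
        return total / pilots

    return max(candidates, key=avg_amplitude)

# 8 candidate phase-shifts; the true channel phase lies on the grid
grid = [k * math.pi / 4 for k in range(8)]
best = estimate_phase_shift(true_phase=math.pi / 2, candidates=grid)
```

Averaging over more pilots tightens the separation between the best candidate and its neighbours, which is where the exponential decay of the error probability comes from.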

Manjesh Kumar Hanawal received the BTech degree in ECE from NIT, Bhopal, in 2004, the M.S. degree in ECE from the Indian Institute of Science, Bangalore, India, in 2009, and the Ph.D. degree from INRIA, Sophia Antipolis, and the University of Avignon, France, in 2013. After two years of postdoctoral work at Boston University, he joined Industrial Engineering and Operations Research at the Indian Institute of Technology Bombay, Mumbai, India, where he is now an associate professor. During 2004-2007, he was with CAIR, DRDO, working on various security-related projects. His research interests include communication networks, machine learning, and cybersecurity. He is a recipient of the Inspire Faculty Award from DST and the Early Career Research Award from SERB. He has received several research grants, including MATRIX from SERB and the Indo-French Collaborative Scientific Research Programme from CEFIPRA. His work received the best paper award (honourable mention) at COMSNETS 2018.


Accepted Papers

  1. Estimating Task Completion Times for Network Rollouts using Statistical Models within Partitioning-based Regression Methods
    Venkatachalam Natchiappan, Shrihari Vasudevan, Thalanayar Muthukumar

  2. Financial Fake News Detection via Context-Aware Embedding and Sequential Representation using Cross-Joint Networks
    Padmapriya Mohankumar, Ashraf Kamal, Vishal Kumar Singh, Amrish Satish

  3. Network Intrusion Detection Through Machine Learning With Efficient Feature Selection
    Rohan Desai, Venkatesh Tiruchirai Gopalakrishnan

  4. Scene Reconstruction and Trajectory Estimation in Hierarchical MANET
    Atrayee Gupta

  5. SIRM: Cost efficient and SLO aware ML prediction on Fog-Cloud Network
    Chetan Phalak, Dheeraj Chahal, Rekha Singhal

  6. Understanding Network Nodal Points for Emergency Services
    M Saravanan, V Rajagopalan, Divya Sachdeva

  7. ZoneSync: Real-Time Identification of Zones in IoT-Edge
    Manish Kausik H, Jagnyashini Debadarshini, Himanshu Goyal, Sudipta Saha


As connectivity and storage get cheaper, we are seeing more opportunities for data-driven approaches to networked data and systems. The adoption of machine learning, artificial intelligence, and data analytics techniques in these networked systems is set to transform and disrupt many areas of business and everyday human life. The MINDS (Machine Intelligence in Networked Data and Systems) workshop (co-located with COMSNETS 2023) aims to bring together researchers and practitioners to understand and explain this inter-working of machine learning, big data analytics, and networked systems for various application domains.

MINDS welcomes original research submissions that define challenges, report experiences, or discuss progress toward designs and solutions that integrate machine learning, artificial intelligence, data analytics, deep learning, mobile systems, and networked systems in various application areas. These application areas include healthcare, environment, retail, transportation, life sciences, e-commerce, cloud services, etc. Contributions describing techniques applied to real-world problems, and interdisciplinary research involving novel networking architectures, system designs, IoT systems, and big data systems that use techniques from machine learning, artificial intelligence, deep learning, and data analytics as the core component, are especially encouraged.

The topics of interest include but are not limited to:

Applications
  • Design and implementation of intelligent systems for applications such as home automation, self-driving vehicles, driver assistance systems, supply chain, and logistics
  • Cloud based machine and deep learning applications in retail and e-commerce
  • Machine learning systems for healthcare, weather modelling, life sciences, and environment monitoring
  • Machine learning in management of pandemics (e.g., COVID-19, Ebola, SARS)
  • Detection of fake news and control of spread of misinformation
Internet of Things (IoT)
  • Machine learning driven systems using mobile phones, embedded devices, and sensor networks
  • Applications of machine learning in IoT, IIoT, manufacturing, and supply chain optimisation
  • Experiences in managing wearable devices, smart-home systems and mobile sensor networks
  • Federated Learning and Distributed Learning for distributed computation and decision making
  • Learning with Noisy Labels and Adversarial Robustness
Networking
  • Root cause analysis and failure prediction using system and network logs
  • Applications of machine/deep/reinforcement learning in satellite networks, cellular networks and WiFi networks
  • Machine learning driven algorithms and tools for network anomaly detection, privacy, and network security
  • Machine learning and data mining of large-scale network measurements
  • Stream-based machine learning for networked data
  • Machine learning driven algorithms for network scheduling and control
  • Challenges and solutions in IoT data and stream processing at the edge and in the cloud
  • High dimensional big data (images, videos) analysis using machine/deep learning
  • Scalability, privacy, and security of networked learning architectures
  • Distributed privacy-preserving algorithms, privacy attacks, and federated learning
Social Media Networks
  • Machine learning driven analysis of text, image, and video data on social media
  • Security, privacy, trust analysis, health analytics in social media and digital networks
  • Information diffusion modeling and inference, fake news detection, and knowledge transfer in social media and digital networks
  • Anomaly and outlier detection in social networks
  • Computational models and agent-based simulations of social networks
  • Reinforcement learning, inverse reinforcement learning, and other learning-based interventions for tackling misinformation spread.
  • Learning-based approaches to analyze COVID-related issues on social media.

Submission Guidelines

  • MINDS invites submissions of original work that is neither previously published nor under review at another conference or journal.
  • Submissions (including title, author list, abstract, all figures, tables, and references) must be no greater than 5 pages in length.
  • Reviews will be single-blind: authors' names and affiliations should be included in the submission.
  • Submissions must follow the formatting guidelines given on the IEEE website; submissions that do not meet the size and formatting requirements will not be reviewed.
  • All papers must be in Adobe Portable Document Format (PDF) and submitted through the MINDS Workshop submission site on EDAS.
  • All workshop papers will appear in the conference proceedings and will be submitted to IEEE Xplore as well as other Abstracting and Indexing (A&I) databases.

Paper Submission Link: https://edas.info/N29766.


Technical Program Committee

  • Dr. Md Tanvir Amin, Google
  • Dr. Amy Babay, University of Pittsburgh
  • Dr. Himel Dev, Snapchat
  • Dr. Lei Huang, Microsoft
  • Dr. Rohit Kumar, DTU Delhi
  • Dr. Qiang Liu, University of Nebraska-Lincoln
  • Dr. Manuj Mukherjee, IIIT Delhi
  • Mr. Jayakrishnan Nair, IIT Bombay
  • Prof. Kaliappa Ravindran, City University of New York
  • Dr. Prem Singh, IIIT Bangalore
  • Dr. Rahul Singh, Indian Institute of Science
  • Dr. Jaya Sreevalsan-Nair, IIIT Bangalore
  • Prof. Lewis Tseng, Boston College
  • Prof. Md Yusuf Sarwar Uddin, University of Missouri-Kansas City
  • Dr. Le Xu, University of Texas at Austin

Workshop Co-Chairs

Sumit J Darak

IIIT-Delhi, India

Muntasir Raihan Rahman

Microsoft Research, USA

Sirisha Rambhatla

University of Waterloo, Canada