COMSNETS 2025

17th International Conference on COMmunication Systems & NETworkS

January 6 - 10
Chancery Pavilion Hotel, Residency Road, Bengaluru, India

Initiative by COMSNETS Association




Invited Speakers

Takuro Yonezawa

Nagoya University, Japan

Visit Homepage
Talk Date: 8th Jan, 2025

Changes in the real world, and their impacts, are expanding at an accelerating pace, driven by the COVID-19 pandemic, military conflicts across various countries, and large-scale natural disasters such as earthquakes and typhoons. Simultaneously, advances in AI technologies, exemplified by large language models, are continuously expanding their applicability, and the trends toward open hardware and open software are further reducing the time lag between research and societal deployment. This "fluid reality," in which yesterday's reality may no longer apply today, is expected to intensify. Moreover, real-space sensing using IoT sensors and LiDAR is becoming more advanced and affordable, enabling digital twins, which provide digital replicas and future simulations of physical spaces, as well as highly immersive virtual reality technologies. These developments are shaping a new reality that interlinks physical and informational spaces, thereby increasing the diversity of reality. Capturing, connecting, and enhancing mutual understanding among people, and creating new value from these diverse and multifaceted realities, requires new perspectives and a range of new technologies. In this talk, I will introduce and discuss the vision and challenges of our Internet of Realities project.

Takuro Yonezawa is an associate professor in the Graduate School of Engineering, Nagoya University, Japan. He received his Ph.D. in Media and Governance from Keio University in 2010. His research interests lie at the intersection of distributed systems, human-computer interaction, and sensor/actuator technologies. He has also led several smart city projects as a technical coordinator, including the FP7/NICT European-Japanese collaborative research project (ClouT), the NICT social big data project, and the MIC G-Space city project. Currently, he leads the Internet of Realities project under JST CREST and JST RISTEX funding. His awards include the IBM Ph.D. Fellowship (2009), the IPSJ Yamashita SIG Research Award (2013/2019), and the IPSJ/IEEE Computer Society Young Computer Researcher Award (2021).


Nitinder Mohan

Delft University of Technology (TU Delft), Netherlands

Visit Homepage
Talk Date: 7th Jan, 2025

Starlink, with its expansive constellation, is fundamentally transforming global Internet connectivity, positioning itself as a potential "global ISP" that can end global Internet monopolies and close connectivity gaps in remote and disaster-affected areas. Prior investigations into Starlink's network performance have been region-specific and lack a holistic understanding of its global operations. In this talk, I will present findings from our extensive longitudinal study of Starlink's network performance, which provides a global overview of Starlink's bandwidth and latency and identifies key factors that influence its service quality. I will also explore Starlink's dynamic reconfiguration of its satellite links and examine the growing chasm between space-based Internet access and traditional, terrestrially designed networking technologies.

Nitinder Mohan is an Assistant Professor at the Delft University of Technology (TU Delft), Netherlands. Prior to this, he was a senior researcher in the Chair of Connected Mobility at the Technical University of Munich, Germany. He received his Ph.D. from the University of Helsinki, where his dissertation received the "Outstanding Ph.D. Dissertation Award 2020" from the IEEE Technical Committee on Scalable Computing (TCSC), and his M.Tech. from IIIT Delhi. His research interests are in edge computing, orchestration systems, next-generation networked systems, satellite networks, multipath transport, and wide-scale Internet measurements. He is also the co-founder and co-organizer of the IEEE PerFail and ACM LEO-NET workshop series.


Rama K Govindaraju

Senior Engineering Director, Nvidia

Visit Homepage
Talk Date: 7th Jan, 2025

Building large-scale Deep Learning AI infrastructure is the new arms race. The expectation is that substantial breakthroughs in many grand-challenge problems, such as (a) usable AGI in all areas, (b) protein folding, (c) predicting diseases, and (d) cures for diseases such as cancer, are limited only by the lack of access to large-scale AI infrastructure. Rapid progress has been made in the last decade on developing accelerators for AI and on building large-scale infrastructure to support these emerging breakthroughs in Deep Learning based AI. However, developing such infrastructure and growing it for future needs must first overcome substantial challenges currently being experienced, including (a) reliability issues, (b) seamless scalability, and (c) access to cheap power and water, along with the need for more robust and reliable distributed infrastructure. This talk will focus on these challenges and the opportunities that exist in creative solutions to address them.

Rama is currently a Senior Engineering Director at Nvidia, California, where he leads the DGX Cloud Performance and Architecture team. Earlier, at Google, Rama built and led the team that defined the methodology informing the architecture and design of ~10 generations of servers, 5 generations of TPUs (Tensor Processing Units for AI/ML workloads), video transcoding accelerators, and storage systems, all deployed at scale in Google data centers. He was responsible for the efficient end-to-end operation of the fleet, enabling the virtuous cycle of HW-SW co-design, and defined the strategic vision for many critical aspects adopted at Google. Prior to that, Rama was a Distinguished Engineer and HPC Software Architect at IBM, where he led software architecture for 5 generations of IBM supercomputers.
Rama holds an MS and a PhD in Computer Science from Rensselaer Polytechnic Institute in New York and a BS in Computer Science from BIT Mesra, Ranchi, India.


Rahul Chatterjee

University of Wisconsin Madison, US

Visit Homepage
Talk Date: 8th Jan, 2025

Smart home (IoT) devices, such as Internet-connected thermostats, door locks, and item trackers, are revolutionizing our personal spaces by offering unparalleled convenience and automation. While these innovations undoubtedly enhance our lives, they also introduce significant privacy and safety risks. This talk focuses on an often-overlooked dimension of these risks: how smart home technologies intersect with intimate partner violence (IPV).
I will present findings from our recent research on how abusive partners exploit smart home devices to spy on, stalk, and harass their victims, as well as how survivors are navigating and coping with these emerging forms of abuse. The accessibility of surveillance technologies has made the situation worse. Hidden cameras and microphones are readily available from major U.S. retailers like Amazon and Best Buy, while existing defensive tools for detecting such devices remain largely ineffective. Moreover, devices not traditionally considered surveillance tools—such as smart thermostats and smart speakers—are increasingly being weaponized to control or intimidate survivors.
Addressing this growing issue presents several critical challenges for ensuring safety and privacy in smart home environments. I hope to inspire the research community to tackle these challenges and contribute to mitigating the risks posed by the misuse of smart home technologies.

I design secure systems to make digital technologies safe and secure for everyone. My research methodology combines empiricism with analytical techniques. Some of my recent research includes designing secure and usable authentication systems, securing private data in trigger-action platforms, mitigating abuse of smart home devices by abusive intimate partners, and building a security mindset in CS undergraduate students. Also check out our Madison Security & Privacy group. I am always looking for self-motivated students who are interested in working on real-world digital security and privacy problems. If you have a relevant research idea and want to collaborate, please drop me a note.

With the help of a number of motivated students, we run the Madison Tech Clinic (MTC) to support survivors of domestic and intimate partner violence who are experiencing technology-facilitated abuse. This is an initiative similar to CETA in NYC. We are looking for passionate volunteers for MTC; please reach out to me if you are interested.


Samarjit Chakraborty

University of North Carolina Chapel Hill, US

Visit Homepage
Talk Date: 8th Jan, 2025

Many Cyber-Physical Systems (CPS), such as autonomous vehicles and robots, rely on compute-intensive Machine Learning (ML) algorithms, especially for perception processing. A growing trend is to implement such ML algorithms in the cloud. However, issues like data transfer overhead, data loss during communication, and the delay introduced by communicating with the cloud necessitate some form of hybrid edge-cloud solution, in which part of the processing is done locally and the rest in the cloud; how to do this partitioning is explored in the body of work referred to as Split Computing (SC). In this talk, we will discuss different SC architectures and their implications for controller design for CPS.
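A minimal sketch of the split-computing idea under simple assumptions (toy linear layers, a fixed split point, and float16 serialization standing in for real compression), not one of the specific SC architectures discussed in the talk: the first few layers run on the edge device, the intermediate activation is shipped over the network, and the remaining layers run in the cloud.

import numpy as np

rng = np.random.default_rng(0)
layers = [rng.standard_normal((16, 16)) for _ in range(6)]  # toy linear layers

def run_layers(x, weights):
    for w in weights:
        x = np.maximum(x @ w, 0.0)  # linear layer followed by ReLU
    return x

def split_inference(x, split_at):
    head, tail = layers[:split_at], layers[split_at:]
    z = run_layers(x, head)                    # computed on the edge device
    payload = z.astype(np.float16).tobytes()   # quantize/serialize before the uplink
    z_cloud = np.frombuffer(payload, dtype=np.float16).astype(np.float32).reshape(z.shape)
    return run_layers(z_cloud, tail)           # computed in the cloud

y = split_inference(rng.standard_normal((1, 16)), split_at=2)
print("output shape:", y.shape)

Where the split is placed trades local compute against uplink volume and cloud round-trip delay, which is precisely the design space that SC architectures explore.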

Samarjit Chakraborty is a Kenan Distinguished Professor in the Department of Computer Science at UNC Chapel Hill. Prior to joining UNC, he was a professor of Electrical Engineering at the Technical University of Munich in Germany, where he held the Chair of Real-Time Computer Systems. Before that, he was an assistant professor of Computer Science at the National University of Singapore. He obtained his PhD from ETH Zurich in 2003. His research interests cover all aspects of designing hardware and software for embedded computers. He is a Fellow of the IEEE and received the 2023 Humboldt Professorship Award from Germany.


Sarani Bhattacharya

IIT Kharagpur, India

Visit Homepage
Talk Date: 8th Jan, 2025

The evolution of computer architecture has taken place through several sophisticated and ingenious techniques, such as out-of-order execution, caching, branch prediction, speculative execution, and a host of other optimizations that maximize throughput and enhance performance. While it is imperative to adopt and develop these artifacts in our modern-day machines, it is equally necessary to understand the security threats they pose, particularly to the execution of cryptographic programs operating on sensitive data. Because the foremost criterion for these architectural components has been performance, a multitude of microarchitectural attacks have been unearthed that exploit information leakage arising from how these artifacts function; with the growing importance of security in the applications where modern computing is used, these optimizations need closer investigation. Beginning with an examination of fundamental concepts in microarchitecture, the talk elucidates the role of hardware in shaping the security posture of a system, explores various vulnerabilities inherent in microarchitectural elements, and analyzes their implications for overall system security.

Sarani Bhattacharya is an Assistant Professor in the Department of Computer Science and Engineering, IIT Kharagpur. Before joining IIT Kharagpur, she worked at imec, Belgium, in the domain of high-performance computing. She received her Ph.D. from IIT Kharagpur in the area of microarchitectural security and has postdoctoral experience in hardware security from COSIC, KU Leuven. Her teaching and research interests span computer architecture, computer security, microarchitectural advancements, and their inherent security implications.


Shubham Agarwal

Adobe Research

Visit Homepage
Talk Date: 8th Jan, 2025

Text-to-image generation using diffusion models has seen explosive popularity owing to their ability to produce high-quality images that adhere to text prompts. However, diffusion models go through a large number of iterative denoising steps and are resource-intensive, requiring expensive GPUs and incurring considerable latency. In this talk, I will introduce a novel approximate-caching technique that reduces these iterative denoising steps by reusing intermediate noise states created during a prior image generation. Based on this idea, we present an end-to-end text-to-image generation system, NIRVANA, that uses approximate caching with a novel cache-management policy to provide 21% GPU compute savings, 19.8% end-to-end latency reduction, and 19% dollar savings on two real production workloads. I will also present an extensive characterization of real production text-to-image prompts from the perspective of caching, popularity, and reuse of intermediate states in a large production environment. Link to paper: https://www.usenix.org/conference/nsdi24/presentation/agarwal-shubham
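As an illustration of the approximate-caching idea described above, here is a minimal sketch: intermediate noise states from earlier generations are cached, and a sufficiently similar new prompt resumes denoising from a cached state instead of from pure noise. The toy embedding, denoiser, similarity threshold, and step counts are assumptions for illustration only, and do not reflect NIRVANA's actual components or cache-management policy.

import numpy as np

rng = np.random.default_rng(0)
TOTAL_STEPS, RESUME_STEP, SIM_THRESHOLD = 50, 30, 0.9
cache = []  # list of (prompt_embedding, intermediate_noise_state)

def embed(prompt):
    # stand-in for a real text encoder: a normalized byte-level vector
    vec = np.frombuffer(prompt.encode().ljust(64, b" ")[:64], dtype=np.uint8).astype(float)
    return vec / np.linalg.norm(vec)

def denoise_step(x, step):
    # stand-in for one diffusion denoising step
    return 0.95 * x + 0.05 * rng.standard_normal(x.shape)

def generate(prompt):
    e = embed(prompt)
    start, x = 0, rng.standard_normal((64, 64))
    for cached_e, cached_x in cache:
        if float(e @ cached_e) > SIM_THRESHOLD:   # approximate cache hit
            start, x = RESUME_STEP, cached_x.copy()
            break
    for step in range(start, TOTAL_STEPS):
        x = denoise_step(x, step)
        if step == RESUME_STEP - 1 and start == 0:
            cache.append((e, x.copy()))           # store a reusable intermediate state
    return x

generate("a red bicycle on a beach")
generate("a red bicycle on a sunny beach")  # similar prompt can skip the early denoising steps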

Shubham is currently working as a pre-doctoral researcher at Adobe Research, India, with a primary focus on optimizing machine learning systems for large-scale training and deployment. His expertise includes inference optimization for production systems for LLMs and text-to-image models, efficient scheduling and resource management for generative models, and improving the efficiency of generative models at a hardware level. Shubham completed his bachelor's degree in computer science from BITS Pilani in 2022 and he plans to apply for a PhD next fall.


Arpit Agarwal

Indian Institute of Technology, Bombay

Visit Homepage
Talk Date: 7th Jan, 2025

Defining an appropriate objective function is a cornerstone of the AI development pipeline, guiding models to achieve desired outcomes. However, in practice, the true objective function is often either poorly defined or impractical to optimize. This necessitates reliance on proxy objectives, which may diverge from the true goals, leading to unintended consequences and suboptimal solutions.
This talk explores this challenge in the context of recommender systems, the pervasive AI-based systems that online platforms use to filter content for their users. While the intended objective of recommender systems is to maximize user utility, the true measure of user utility often remains unknown. Consequently, platforms typically resort to engagement metrics such as likes, shares, and watch time as proxies. However, optimizing these metrics can misalign recommendations with user utility, resulting in clickbait, harmful, or otherwise undesirable content.
To address this issue, we introduce a novel framework that infers user utility through return probability—a robust indicator of sustained user satisfaction—rather than engagement metrics. Leveraging a generative Hawkes process model, we disentangle short-term, impulsive (System-1) behaviors from long-term, utility-driven (System-2) behaviors. This approach enables us to optimize recommendations for meaningful utility, aligning system objectives with user goals over the long term. By redesigning the objective function to align better with user utility, we pave the way for more responsible and trustworthy recommender systems.
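To make the modeling idea concrete, below is a minimal sketch of a Hawkes-style return model: a baseline rate stands in for long-term, utility-driven (System-2) returns, while a self-exciting kernel captures short-term, impulsive (System-1) re-engagement after recent interactions. The parameter names, values, and likelihood form are illustrative assumptions, not the fitted model from the work described.

import numpy as np

def return_intensity(t, past_events, mu=0.2, alpha=1.5, beta=4.0):
    """lambda(t) = mu + sum_i alpha * beta * exp(-beta * (t - t_i)) over events t_i < t."""
    past = np.asarray([ti for ti in past_events if ti < t])
    excitation = np.sum(alpha * beta * np.exp(-beta * (t - past))) if past.size else 0.0
    return mu + excitation  # mu: utility-driven returns; excitation: impulsive returns

def log_likelihood(events, horizon, mu, alpha, beta):
    """Hawkes log-likelihood over an observation window [0, horizon] (in days)."""
    events = np.asarray(events)
    ll = sum(np.log(return_intensity(t, events, mu, alpha, beta)) for t in events)
    # compensator: integral of the intensity over [0, horizon]
    ll -= mu * horizon + np.sum(alpha * (1 - np.exp(-beta * (horizon - events))))
    return ll

events = [0.5, 0.7, 2.4, 5.1, 5.2]  # toy return times of one user, in days
print(log_likelihood(events, horizon=7.0, mu=0.2, alpha=1.5, beta=4.0))

Comparing the fitted baseline rate across recommendation policies would indicate which policy drives more sustained, utility-driven returns rather than short bursts of impulsive engagement.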

Arpit Agarwal is an Assistant Professor at the CSE Department at IIT Bombay. His research lies in the area of machine learning (ML) and artificial intelligence (AI). His focus is on human-centered AI which includes learning from human feedback, understanding AI impact on individuals and society, and designing socially responsible AI. Prior to joining IIT Bombay, he was a researcher at FAIR Labs (Meta). Before that he was a postdoctoral fellow at the Data Science Institute at Columbia University. He completed his PhD from the CIS Department at University of Pennsylvania.


Akanksha Agrawal

Indian Institute of Technology, Madras

Visit Homepage
Talk Date: 7th Jan, 2025

Clustering is a family of problems that aim to group a given set of objects in a meaningful way: the exact "meaning" may vary based on the application. These are fundamental problems in Computer Science, with applications across fields such as pattern recognition, machine learning, computational biology, bioinformatics, and social science. Real-world data is often contaminated with a small amount of noise, and this noise can substantially change the clusters that an algorithm produces. To circumvent such issues, clustering in the presence of outliers has been studied extensively. In this talk, we will look at a general approach for obtaining fixed-parameter tractable algorithms for clustering with outliers that achieve approximation ratios almost matching their outlier-free counterparts.
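As a concrete illustration of the problem setting (not the fixed-parameter tractable algorithm discussed in the talk), here is a minimal Lloyd-style k-means heuristic that simply ignores the z points farthest from their nearest center in each iteration, so a handful of noisy points cannot drag the cluster centers.

import numpy as np

def kmeans_with_outliers(points, k, z, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        nearest, dist = d.argmin(axis=1), d.min(axis=1)
        inliers = np.argsort(dist)[: len(points) - z]      # drop the z farthest points
        centers = np.array([
            points[inliers][nearest[inliers] == c].mean(axis=0)
            if np.any(nearest[inliers] == c) else centers[c]
            for c in range(k)
        ])
    return centers, np.setdiff1d(np.arange(len(points)), inliers)

# two tight clusters plus one far-away noise point that is treated as the outlier
pts = np.vstack([np.random.default_rng(1).normal(loc, 0.1, (50, 2)) for loc in (0, 5)] + [[[30, 30]]])
centers, outliers = kmeans_with_outliers(pts, k=2, z=1)
print(centers, outliers)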

