New Wireless Image Tech Intuitively Filters Data

Abstract

Recently, semantic communications have drawn great attention as a groundbreaking concept that goes beyond the capacity limits of Shannon's theory. In particular, semantic communications are expected to become crucial in realizing visual tasks that demand massive network traffic. Although highly distinctive forms of visual semantics exist for computer vision tasks, a thorough investigation of which visual semantics can be transmitted in time and which are required for completing different visual tasks has not yet been reported. To this end, we first scrutinize the achievable throughput of transmitting existing visual semantics over limited wireless communication bandwidth. We further demonstrate the resulting performance of various visual tasks for each type of visual semantics. Based on this empirical testing, we argue that task-adaptive selection of visual semantics is crucial for real-time semantic communications for visual tasks: we transmit basic semantics (e.g., the objects in a given image) for simple visual tasks, such as classification, and richer semantics (e.g., scene graphs) for complex tasks, such as image regeneration. To further improve transmission efficiency, we propose a filtering method for scene graphs that drops redundant information, allowing only the semantics essential for completing the given task to be sent. We confirm the efficacy of our task-adaptive semantic communication approach through extensive simulations over wireless channels, showing more than 45 times greater throughput than naive transmission of the original data.

A new AI-driven technology developed by researchers at UNIST promises to significantly reduce data transmission loads during image transfer, paving the way for advancements in autonomous vehicles, remote surgery and diagnostics, and real-time metaverse rendering, applications that demand rapid, large-scale visual data exchange without delay.

Led by Professor Sung Whan Yoon from the Graduate School of Artificial Intelligence at UNIST, the research team announced the development of 'Task-Adaptive Semantic Communication,' an innovative wireless image transmission method that selectively transmits only the most essential semantic information relevant to the specific task.

Current wireless image transmission methods compress entire images without considering their underlying semantic structures, such as objects, layout, and relationships, resulting in bandwidth limitations and transmission delays that hinder real-time, high-resolution image sharing.

In contrast, the new technology intelligently filters and transmits only the critical semantic components necessary for the task. For example, if the goal is simply to classify objects within an image, only information about the objects, like 'Cat' or 'Car', is sent. However, if the task involves generating a detailed image, the system transmits additional data on object arrangements and their relationships, such as 'Cat wearing hat' or 'Man is sitting on a chair.'
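To make the idea concrete, here is a minimal Python sketch of such a selection rule. It is an illustration only, not the team's implementation: the triplet scene-graph representation, the select_semantics function, and the task labels are assumptions made for this example.

    # Illustrative sketch only, not the published implementation.
    # A scene graph is modeled as (subject, predicate, object) triplets.

    Triplet = tuple[str, str, str]  # e.g., ("Cat", "wearing", "hat")

    def select_semantics(scene_graph: list[Triplet], task: str):
        """Return only the semantics needed for the given visual task."""
        if task == "classification":
            # Simple tasks need only the object labels ("Cat", "Car", ...).
            labels = {s for s, _, _ in scene_graph} | {o for _, _, o in scene_graph}
            return sorted(labels)
        # Complex tasks (e.g., image regeneration) also need object
        # arrangements and relationships, so the full graph is transmitted.
        return scene_graph

    graph = [("Cat", "wearing", "hat"), ("Man", "sitting on", "chair")]
    print(select_semantics(graph, "classification"))  # ['Cat', 'Man', 'chair', 'hat']
    print(select_semantics(graph, "regeneration"))    # full triplets

The point of the rule is that the transmitted payload shrinks to a handful of labels whenever the downstream task does not need relational structure.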

Furthermore, the team employed a semantic filtering algorithm that removes redundant information, such as universally true statements like 'Man has head' or duplicated data like 'Pole is in hand' and 'Man is holding pole', thus reducing unnecessary data transmission while preserving the essential context for the task.
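The filtering step can likewise be sketched in a few lines of Python. Again, this is a hypothetical illustration: the commonsense set and the paraphrase map below are placeholder stand-ins for whatever knowledge the actual algorithm uses to detect redundancy.

    # Illustrative sketch only; the redundancy rules are placeholders.

    Triplet = tuple[str, str, str]

    # Universally true statements carry no task-relevant information.
    COMMONSENSE = {("Man", "has", "head")}

    # Map paraphrased relations to one canonical form so duplicates collide.
    CANONICAL = {("Pole", "is in", "hand"): ("Man", "is holding", "pole")}

    def filter_scene_graph(triplets: list[Triplet]) -> list[Triplet]:
        """Drop commonsense triplets and deduplicate paraphrased relations."""
        kept, seen = [], set()
        for t in triplets:
            canon = CANONICAL.get(t, t)
            if canon in COMMONSENSE or canon in seen:
                continue  # redundant: universally true or already kept
            seen.add(canon)
            kept.append(canon)
        return kept

    graph = [("Man", "has", "head"),
             ("Man", "is holding", "pole"),
             ("Pole", "is in", "hand")]
    print(filter_scene_graph(graph))  # [('Man', 'is holding', 'pole')]

Of the three input triplets, only one survives: the commonsense statement is dropped outright, and the two paraphrases collapse into a single canonical relation.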

Simulation results demonstrate that this approach achieves more than 45 times the throughput of conventional transmission of the original data, enabling real-time visual task execution across a range of wireless channel conditions.

Professor Yoon commented, "Future wireless communication will focus not just on accurately transmitting data, but on meaningfully transmitting information." He further noted, "This research marks a pivotal step in the shift toward intelligent wireless communication."

First author Jeonghun Park added, "This technology is expected to support critical applications such as autonomous vehicle perception systems, remote medical procedures, and real-time metaverse rendering, where large-scale visual data must be exchanged swiftly and reliably."

The findings of this research were published in the IEEE Journal on Selected Areas in Communications (JSAC), one of the top-tier journals in the field of communications, on October 20, 2025.

This research was supported by the Ministry of Science and ICT (MSIT), the Institute for Information & Communications Technology Planning & Evaluation (IITP), the Ministry of Health and Welfare (MOHW), and the National Research Foundation of Korea (NRF).

Journal Reference

Jeonghun Park and Sung Whan Yoon, "Transmit What You Need: Task-Adaptive Semantic Communications for Visual Information," IEEE Journal on Selected Areas in Communications, 2025.
