Oral: 20 minutes including Q&A. The projector accepts HDMI input.
Poster: A0 portrait (841 mm wide × 1189 mm high). Magnets and tape will be provided.
July 17th (Wed)
1:30-3:00: Oral Session Wed-PM, 6E/6F (6th floor), Chair: Junya Hara (Osaka University)
Diagonal preconditioning of a primal-dual splitting algorithm for graph signal processing: Kazuki Naganuma, Shunsuke Ono (Tokyo Institute of Technology)
IRREGULARITY AWARE GRAPH SIGNAL INTERPOLATION: Darukeesan Pakiyarajah, Eduardo Pavez, Antonio Ortega (University of Southern California)
Dynamic Sensor Placement on Graphs Based on Sampling Theory and Online Dictionary Learning: Saki Nomura (Tokyo University of Agriculture and Technology), Junya Hara, Hiroshi Higashi, Yuichi Tanaka (Osaka University)
Interpretable Lightweight Transformer via Unrolling of Learned Graph Smoothness Priors: Tam Thuc Do, Parham Eftekhar, Seyed Alireza Hosseini, Gene Cheung (York University), Philip Chou (packet.media)
3:30-4:10: Invited Talk Wed-PM, 6E/6F (6th floor), Chair: Shunsuke Ono (Tokyo Institute of Technology)
Sensor placement problem on networks from theories to applications: Junya Hara (Osaka University)
Abstract: This talk introduces the topic of sensor placement problems in networks. In applications, sensors form a network, i.e., a sensor network, to communicate efficiently and monitor the surrounding environment. However, it is often infeasible to observe measurements at all nodes due to constraints such as a limited number of sensors and energy consumption. Consequently, it is crucial to select the most "informative" sensors from the available candidates. A key criterion for evaluating a sensor placement is how well data at unobserved nodes can be inferred from the selected sensors' observations, a problem closely related to graph sampling theory. This criterion has prompted diverse studies across various disciplines and applications. This presentation reviews these studies from a comprehensive perspective.
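The sampling-theoretic criterion the abstract mentions can be illustrated with a small sketch (not from the talk itself): assuming signals are bandlimited to the first k Laplacian eigenvectors, sensors are picked greedily so that the sampled rows of the eigenvector matrix stay well conditioned, which keeps recovery of unobserved nodes stable. The function name and the E-optimal-style criterion are illustrative choices, not necessarily those covered in the talk.

```python
import numpy as np

def greedy_sensor_selection(L, k, n_sensors):
    """Greedily pick sensor nodes under a k-bandlimited graph signal model.

    Signals are assumed to lie in the span of the first k eigenvectors of the
    graph Laplacian L; we select rows of that eigenvector matrix so that the
    sampled submatrix keeps a large smallest singular value (stable recovery).
    """
    _, eigvecs = np.linalg.eigh(L)
    Uk = eigvecs[:, :k]                       # first k graph frequency modes
    selected = []
    candidates = set(range(L.shape[0]))
    for _ in range(n_sensors):
        best, best_val = None, -np.inf
        for v in candidates:
            rows = selected + [v]
            # smallest singular value of the sampled submatrix
            s = np.linalg.svd(Uk[rows, :], compute_uv=False)[-1]
            if s > best_val:
                best, best_val = v, s
        selected.append(best)
        candidates.remove(best)
    return selected

# Example: a path graph with 6 nodes
A = np.diag(np.ones(5), 1); A = A + A.T
L = np.diag(A.sum(1)) - A
print(greedy_sensor_selection(L, k=2, n_sensors=2))
```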
4:30-5:30: IEEE Signal Processing Society DL Talk, 6E/6F (6th floor), Chair: Toshihisa Tanaka (Tokyo University of Agriculture and Technology). Open to everyone without GSAL registration; a registration link is provided for the DL talk only.
Interpretable Convolutional NNs and Graph CNNs: Role of Domain Knowledge: Danilo Mandic (Imperial College London)
Abstract: The success of deep learning (DL) and convolutional neural networks (CNN) has also highlighted that NN-based analysis of signals and images of large sizes poses a considerable challenge, as the number of NN weights increases exponentially with data volume – the so-called Curse of Dimensionality. In addition, the largely ad-hoc fashion of their development, albeit one reason for their rapid success, has also brought to light the intrinsic limitations of CNNs, in particular those related to their black-box nature. To this end, we revisit the operation of CNNs from first principles and show that their key component – the convolutional layer – effectively performs matched filtering of its inputs with a set of templates (filters, kernels) of interest. This serves as a vehicle to establish a compact matched-filtering perspective of the whole convolution-activation-pooling chain, which allows for a theoretically well-founded and physically meaningful insight into the overall operation of CNNs. This is shown to help mitigate their interpretability and explainability issues, together with providing intuition for further developments and novel physically meaningful ways of their initialisation. Such an approach is next extended to Graph CNNs (GCNNs), which benefit from the universal function approximation property of NNs, pattern matching inherent to CNNs, and the ability of graphs to operate on nonlinear domains. GCNNs are revisited starting from the notion of a system on a graph, which serves to establish a matched-filtering interpretation of the whole convolution-activation-pooling chain within GCNNs, while inheriting the rigour and intuition from signal detection theory. This both sheds new light onto the otherwise black-box approach to GCNNs and provides well-motivated and physically meaningful interpretation at every step of the operation and adaptation of GCNNs.
It is our hope that the incorporation of domain knowledge, which is central to this approach, will help demystify CNNs and GCNNs, together with establishing a common language between the diverse communities working on Deep Learning and opening novel avenues for their further development.
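The abstract's central observation, that a convolutional layer performs matched filtering of its input against a template, can be seen in a few lines of numpy (an illustrative sketch, not material from the talk): cross-correlating a noisy signal with a kernel produces its strongest response where the input best matches the kernel.

```python
import numpy as np

def conv_layer_response(x, kernel):
    """Valid-mode cross-correlation, i.e. what a conv layer computes."""
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

template = np.array([1.0, 2.0, 1.0])      # the "filter" acting as a matched template
template /= np.linalg.norm(template)

rng = np.random.default_rng(0)
x = rng.normal(size=50)
x[20:23] += 5 * template                  # embed the template at position 20

response = conv_layer_response(x, template)
print(int(np.argmax(np.abs(response))))   # response peaks where the template sits
```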
July 18th (Thu)
10:30-12:00: Oral Session Thu-AM, Keizo Saji Memorial Hall (10th floor), Chair: Cheng Yang (Shanghai University of Electric Power)
Denoising for event-based cameras based on graph spectral features: Shimpei Harada, Junya Hara, Hiroshi Higashi, Yuichi Tanaka (Osaka University)
CONSTRUCTING AN INTERPRETABLE DEEP DENOISER BY UNROLLING GRAPH LAPLACIAN REGULARIZER: Seyed Alireza Hosseini, Tam Thuc Do, Gene Cheung (York University), Yuichi Tanaka (Osaka University)
SwinGNN: Rethinking Permutation Invariance in Diffusion Models for Graph Generation: Qi Yan (University of British Columbia, Vector Institute for AI), Zhengyang Liang (Tongji University), Yang Song (OpenAI), Renjie Liao (University of British Columbia, Vector Institute for AI), Lele Wang (University of British Columbia)
Graph-Based Temporally-Guided Total Variation for Robust Spatiotemporal Fusion of Satellite Images: Ryosuke Isono, Shunsuke Ono (Tokyo Institute of Technology)
1:30-2:30: Plenary Talk #1, Keizo Saji Memorial Hall (10th floor), Chair: Gene Cheung (York University)
Designing Application-dependent Graph Fourier Transforms: Antonio Ortega (University of Southern California)
Abstract: Defining elementary frequency modes is key to graph signal processing (GSP). In conventional signal processing, Fourier transforms provide representations with well-understood properties (oscillatory behavior, time-frequency localization) for all scenarios of interest. Instead, in GSP, standard definitions of frequency derived from graph spectra have very different behaviors depending on the graph (e.g., regular vs irregular) or the normalization choice. Moreover, standard frequencies may be difficult to obtain for certain directed graphs and may have limited interpretability. We present new definitions of GFTs that can address these concerns. For undirected graphs, our new designs are based on letting the choice of the inner product be a function of the application and/or graph. We demonstrate their application for perceptual image coding and graph filterbank design. For directed graphs, we show that the spectral decomposition of the adjacency matrix leads to interpretable frequency mode definitions.
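The idea of letting the inner product depend on the graph or application can be sketched numerically (an illustration under the irregularity-aware GFT formulation, where frequency modes solve the generalized eigenproblem L u = λ Q u; the degree-based choice of Q below is one example, not necessarily the design presented in the talk):

```python
import numpy as np
from scipy.linalg import eigh

# Graph frequency modes w.r.t. a Q-inner product: L u = lambda * Q u.
# Q = I recovers the standard combinatorial-Laplacian GFT.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(1)) - A
Q = np.diag(A.sum(1))             # degree-based inner product (one example choice)

freqs, U = eigh(L, Q)             # modes are Q-orthonormal: U.T @ Q @ U = I
x = np.array([1.0, 2.0, 0.5, -1.0])
x_hat = U.T @ Q @ x               # analysis (forward GFT under the Q-inner product)
x_rec = U @ x_hat                 # synthesis (inverse GFT)
print(np.allclose(x, x_rec))      # → True (perfect reconstruction)
```

Because the eigenvectors are orthonormal under the Q-inner product rather than the Euclidean one, changing Q changes which signals count as "smooth", which is exactly the degree of freedom the talk exploits.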
3:00-3:40: Invited Talk Thu-PM, Keizo Saji Memorial Hall (10th floor), Chair: Shogo Muramatsu (Niigata University)
Graph-based Processing and Learning for 3D imaging: Jin Zeng (Tongji University)
Abstract: Integrating advanced 3D sensors with image restoration algorithms is crucial for achieving high-quality 3D imaging. Although recent deep learning techniques have shown impressive performance in 3D image restoration, they often suffer from limitations in depth accuracy and generalization ability, hindering the progress of the 3D sensing industry. This talk introduces our recent research progress in integrating graph signal processing with deep learning to address these issues. On the one hand, we develop data-driven graph learning for accurate modeling of the irregular 3D images, which enhances the capability of graph-based prior knowledge. On the other hand, we utilize graph-based optimization algorithms to design deep neural networks, which reduces the data dependence and provides graph spectral interpretation for the networks. Our goal is to establish a new paradigm that embeds graph-based priors into neural networks, creating solutions capable of delivering accurate and reliable 3D imaging in complex and dynamic scenarios.
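The kind of graph-based optimization the abstract describes embedding into networks can be sketched in one step (an illustration of graph-Laplacian-regularized denoising, not code from the talk): minimizing ||y − x||² + μ xᵀLx has the closed form x* = (I + μL)⁻¹y, and in an unrolled network μ, and even L's edge weights, become learnable parameters.

```python
import numpy as np

def glr_denoise(y, L, mu):
    """One graph-Laplacian-regularizer step: x* = (I + mu*L)^{-1} y."""
    n = len(y)
    return np.linalg.solve(np.eye(n) + mu * L, y)

# 3-node path graph; center sample is an outlier spike
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
L = np.diag(A.sum(1)) - A
y = np.array([1.0, 5.0, 1.2])
print(glr_denoise(y, L, mu=1.0))   # spike is pulled toward its graph neighbors
```

Note that since L annihilates constant signals, the filter (I + μL)⁻¹ preserves the signal mean, which is part of what makes this prior interpretable in graph spectral terms.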
3:50-4:30: Panel on Emerging Topics in GSP, Keizo Saji Memorial Hall (10th floor)
Moderator: Gene Cheung
Panelists: Antonio Ortega, Vicky Zhao, Hoi To Wai, Shunsuke Ono, Yuichi Tanaka
5:00-6:30: Poster Session, 6E/6F (6th floor), Chair: Hiroshi Higashi (Osaka University)
Generalized Sampling of Graph Signals by Difference-of-Convex Optimization: The Stochastic Prior Case: Keitaro Yamashita, Shunsuke Ono (Tokyo Institute of Technology)
Projection-free Graph-based Classifier Learning using Gershgorin Disc Perfect Alignment: Cheng Yang (Shanghai University of Electric Power), Gene Cheung (York University)
Comparative Study of Balanced Signed Graph Learning for Random Graphs: Haruki Yokota, Hiroshi Higashi, Yuichi Tanaka (Osaka University), Gene Cheung (York University)
Identifying Time-varying Directed Graph Using State-dependent Model: Yuzhe Li (Xi'an Jiaotong University), Hangjing Zhang, Zhuoshi Pan, Ji Qi (Tsinghua University), H. Vicky Zhao (Tsinghua University)
Interpretable Graph Signal Denoising Using Regularization by Denoising: Hayate Kojima (Tokyo University of Agriculture and Technology), Hiroshi Higashi, Yuichi Tanaka (Osaka University)
Unrolling Gradient Graph Laplacian Regularizer for Point Cloud Color Denoising: Hongtao Wang, Fei Chen (Fuzhou University)
GSP-Traffic Dataset: Graph Signal Processing Dataset Based on Traffic Simulation: Rui Kumagai (Osaka University), Hayate Kojima (Tokyo University of Agriculture and Technology), Hiroshi Higashi, Yuichi Tanaka (Osaka University)
DICTIONARY LEARNING FOR DIRECTED GRAPH SIGNALS VIA AUGMENTED GFT: Tsubasa Naito, Ryuto Ito (Niigata University), Yuichi Tanaka (Osaka University), Shogo Muramatsu (Niigata University)
Graph Fourier transform in encryption domain: Yukihiro Bandoh (Shimonoseki City University)
Optimizing k in kNN Graphs with Graph Learning Perspective: Asuka Tamaru (Tokyo University of Agriculture and Technology), Junya Hara, Hiroshi Higashi, Yuichi Tanaka (Osaka University), Antonio Ortega (University of Southern California)
Enhancing Spatio-Spectral Regularization by Graph Modeling for Hyperspectral Image Denoising: Shingo Takemoto, Shunsuke Ono (Tokyo Institute of Technology)
A DESIGN OF DENSER-GRAPH-FREQUENCY GRAPH FOURIER FRAMES FOR UNDIRECTED GRAPH SIGNAL ANALYSIS: Kaito Nitani, Seisuke Kyochi (Kogakuin University)
Graph signal estimation using a cooperative Kalman filter based on the graph filter transfer and optimal transport: Tsutahiro Fukuhara, Junya Hara, Hiroshi Higashi, Yuichi Tanaka (Osaka University)
LOSSY COMPRESSION OF WEIGHTED GRAPHS USING LINE GRAPH FILTER BANKS: Kenta Yanagiya, Junya Hara, Hiroshi Higashi, Yuichi Tanaka (Osaka University), Antonio Ortega (University of Southern California)
Frequency analysis and filter design for directed graphs with polar decomposition: Semin Kwak, Antonio Ortega (University of Southern California)
Causal-Senti-VAE: explaining sentiment analysis with keywords by learning a causal graph: Ji Qi, Zhuoshi Pan, Hangjing Zhang, Hong Vicky Zhao (Tsinghua University)
Hypergraph Transformer for Semi-Supervised Classification: Zexi Liu (Shanghai Jiao Tong University), Bohan Tang (The University of Oxford), Ziyuan Ye (The Hong Kong Polytechnic University), Xiaowen Dong (The University of Oxford), Yanfeng Wang, Siheng Chen (Shanghai Jiao Tong University, Shanghai AI Laboratory)
July 19th (Fri)
10:00-11:00: Oral Session Fri-AM, Keizo Saji Memorial Hall (10th floor), Chair: Kazuki Naganuma (Tokyo Institute of Technology)
Graph Topology Learning with Functional Priors: Hoi To Wai, Chenyue Zhang, Shangyuan Liu (The Chinese University of Hong Kong)
Sparse Graph Learning with Spectrum Prior for Deep Graph Convolutional Networks: Jin Zeng (Tongji University), Gene Cheung (York University), Wei Hu (Peking University)
Non-convex Optimization for Network Community Detection: Yu Iwai (The University of Kitakyushu), Masaaki Nagahara (Hiroshima University)
11:00-12:00: Plenary Talk #2, Keizo Saji Memorial Hall (10th floor), Chair: Yuichi Tanaka (Osaka University)
Graph Machine Learning: Past, Present, and Future: Hisashi Kashima (Kyoto University)
Abstract: Graph machine learning, despite its many commonalities with graph signal processing, has developed as a relatively independent field. This talk will trace the historical progression from graph data mining in the 1990s, through graph kernel methods in the 2000s, to graph neural networks in the 2010s, highlighting the key ideas and advancements of each era. Additionally, recent significant developments, such as the integration with causal inference, will be discussed.