We are currently designing novel reinforcement-learning (RL) algorithms for continuous and high-dimensional state/action spaces.
We do not follow standard routes; instead, we introduce novel Bellman mappings which sample the state space on the fly, require no information on the transition probabilities of the underlying Markov decision process, and may operate even when no training data are available. In contrast to the prevailing line of research, which defines Bellman mappings in \(\mathcal{L}_{\infty}\)-norm Banach spaces (where, by definition, no inner product is available), our Bellman mappings are designed specifically for reproducing kernel Hilbert spaces (RKHSs) to capitalize on their geometry and on the rich properties of their inner products.
See, for example, our papers in [IEEE Transactions on Signal Processing] and [arXiv].
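To convey the flavor of this construction, here is a minimal sketch under our own simplifying assumptions (a Gaussian kernel, a finite set of on-the-fly sampled states used as kernel centers, and illustrative parameter values): a Q-function living in an RKHS, together with a sampled Bellman-style target that needs no transition probabilities. It is not the exact mapping of the papers above.

```python
# A minimal sketch (not the exact Bellman mappings of our papers): a
# Q-function represented in a Gaussian RKHS, with kernel centers given by
# states sampled on the fly, and a sampled Bellman-style target that uses
# only an observed (reward, next state) pair -- no transition probabilities.
# All parameter values below are illustrative placeholders.
import numpy as np

def gaussian_kernel(x, centers, bandwidth=1.0):
    """Gaussian reproducing kernel, evaluated at state x against all centers."""
    d2 = np.sum((x[None, :] - centers) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

class KernelQ:
    """Q-function as a kernel expansion: Q(s, a) = sum_i w[a, i] * k(s, centers[i])."""
    def __init__(self, centers, n_actions, bandwidth=1.0):
        self.centers = centers              # states sampled on the fly
        self.w = np.zeros((n_actions, len(centers)))
        self.bandwidth = bandwidth

    def q(self, state):
        k = gaussian_kernel(state, self.centers, self.bandwidth)
        return self.w @ k                   # vector of Q(state, a) over all actions

def bellman_target(qfun, reward, next_state, gamma=0.9):
    """One sample of a Bellman-style mapping: r + gamma * max_a Q(s', a)."""
    return reward + gamma * np.max(qfun.q(next_state))

# Usage with synthetic data: 20 sampled 3-D states, 4 actions.
centers = np.random.default_rng(0).standard_normal((20, 3))
qfun = KernelQ(centers, n_actions=4)
target = bellman_target(qfun, reward=1.0, next_state=np.zeros(3))
```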
We also introduce new ways to model Q-function losses via Gaussian mixture models and Riemannian optimization. Results on these novel directions will be reported in several publication venues.
See, for example, our preprint on [arXiv] and [TechRxiv].
We study sparse optimization, which aims to estimate solutions with sparsity, that is, vectors most of whose entries are zero. Such sparse representations are particularly useful when only a limited subset of data or features is important, as is the case with high-dimensional data. Applications include compressive sensing, feature selection, audio and image processing, etc.
A basic problem in sparse optimization is to estimate a sparse signal/vector \(\mathbf{x}\) from measurements modeled as \(\mathbf{y} = \mathbf{Ax} + \mathbf{n}\), where \(\mathbf{n}\) is Gaussian noise. A standard approach is to minimize a cost function composed of a quadratic loss term and a sparsity-inducing penalty term. One of the most widely used methods is the LASSO, which employs the \(\mathcal{L}_1\)-norm penalty. However, the \(\mathcal{L}_1\) norm is known to cause estimation bias. To address this issue, the so-called Moreau-enhancement technique has recently received significant attention; see, for example, reference 1, reference 2, and reference 3.
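To fix ideas, the following minimal sketch (with illustrative parameter values only, and not the method of our papers) contrasts the soft-thresholding proximal step of the LASSO with the firm-thresholding step associated with the minimax-concave (MC) penalty, a standard instance of Moreau enhancement, inside a textbook ISTA iteration.

```python
# A minimal sketch, assuming the standard ISTA iteration: recover a sparse x
# from y = A x + n by minimizing (1/2)||y - A x||^2 + penalty(x). Soft
# thresholding realizes the L1 (LASSO) penalty and introduces the well-known
# bias; firm thresholding is the proximal operator commonly associated with
# the minimax-concave (MC) penalty, which removes that bias for large entries.
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def firm_threshold(v, lam, gamma=2.0):
    """Proximal operator associated with the MC penalty (gamma > 1):
    identity for |v| > gamma * lam, hence no bias on large entries."""
    shrunk = np.sign(v) * np.maximum(np.abs(v) - lam, 0.0) * gamma / (gamma - 1.0)
    return np.where(np.abs(v) <= gamma * lam, shrunk, v)

def ista(A, y, lam, prox=soft_threshold, n_iter=500):
    """Iterative shrinkage-thresholding: a gradient step on the quadratic
    loss, followed by the proximal step of the sparsity-inducing penalty."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = prox(x - step * A.T @ (A @ x - y), step * lam)
    return x

# Usage: swap the proximal step to move from LASSO to the MC penalty.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 3.0
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_lasso = ista(A, y, lam=0.1)
x_mc = ista(A, y, lam=0.1, prox=firm_threshold)
```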
Along these lines, we proposed a robust sparse-signal recovery method that exploits Moreau enhancement in an effective way; see [IEEE Transactions on Signal Processing]. We further introduced the new notion of an “external division operator,” which extends the idea of Moreau enhancement; see [IEEE ICASSP 2024]. More results on this new direction will be reported in future publications.
We study the problem of learning from data that live in low-dimensional manifolds.
Loosely speaking, manifolds are smooth surfaces, usually embedded in high-dimensional spaces. Manifolds provide us with a structured and rigorous way to identify latent, low-dimensional data patterns and structures.
Our preferred way of learning in this theme is regression. To this end, we introduced a novel non-parametric regression framework based only on the assumption that the underlying manifold is smooth. Neither explicit knowledge of the manifold nor training data are needed to run our regression tasks. Our design can be straightforwardly extended to reap the benefits of reproducing kernel functions, a well-known machine-learning toolbox. The framework is general enough to accommodate data with missing entries.
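As a point of reference only (it is not our manifold-based framework), the following minimal sketch shows how reproducing kernel functions enter a regression task, via plain kernel ridge regression on synthetic data sampled from a smooth curve.

```python
# A minimal kernel-ridge-regression sketch, meant only to illustrate how
# reproducing kernel functions enter regression; it is *not* the
# manifold-based framework of our papers. Data and parameters are synthetic.
import numpy as np

def gaussian_kernel_matrix(X, Z, bandwidth=0.5):
    """Gram matrix K[i, j] = k(X[i], Z[j]) for the Gaussian reproducing kernel."""
    d2 = (np.sum(X ** 2, axis=1)[:, None]
          - 2.0 * X @ Z.T
          + np.sum(Z ** 2, axis=1)[None, :])
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def kernel_ridge_fit(X, y, lam=1e-2, bandwidth=0.5):
    """Representer theorem: the estimate lives in span{k(., X[i])}, so fitting
    reduces to solving (K + lam * I) alpha = y for the expansion weights."""
    K = gaussian_kernel_matrix(X, X, bandwidth)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_new, bandwidth=0.5):
    return gaussian_kernel_matrix(X_new, X_train, bandwidth) @ alpha

# Usage: noisy samples from a smooth (one-dimensional) curve.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
alpha = kernel_ridge_fit(X, y)
y_hat = kernel_ridge_predict(X, alpha, np.array([[0.5]]))
```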
We validate our designs in several application domains, such as dynamic MRI and graph signal processing, where imputation of data and of edge flows in graphs is required. Several generalizations and novel research directions are currently under study.
See, for example, our papers in IEEE Open Journal of Signal Processing, IEEE Transactions on Computational Imaging, and IEEE Transactions on Medical Imaging.
We study here the case of learning from data/features that live on Riemannian manifolds, a special class of manifolds endowed with an inner product and, thus, with a distance metric.
These concepts may appear abstract, but they give us the freedom to employ our geometric intuition to address learning tasks in a wide variety of application domains. For example, numerous well-known features in signal processing and machine learning belong to Riemannian manifolds: correlation matrices, orthogonal matrices, fixed-rank linear subspaces and tensors, probability density functions, etc.
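To make this concrete, the following minimal sketch computes the standard affine-invariant Riemannian distance \(d(\mathbf{A}, \mathbf{B}) = \lVert \log(\mathbf{A}^{-1/2}\mathbf{B}\mathbf{A}^{-1/2}) \rVert_{\mathrm{F}}\) between two symmetric positive-definite matrices, such as the correlation matrices mentioned above; the matrices are toy data.

```python
# A minimal sketch of Riemannian geometry on one such feature class: the
# affine-invariant distance between two symmetric positive-definite (SPD)
# matrices, e.g., correlation matrices of network time series:
#     d(A, B) = || logm(A^{-1/2} B A^{-1/2}) ||_F .
import numpy as np
from scipy.linalg import fractional_matrix_power, logm

def spd_distance(A, B):
    """Affine-invariant Riemannian distance on the SPD manifold."""
    A_inv_sqrt = fractional_matrix_power(A, -0.5)
    M = A_inv_sqrt @ B @ A_inv_sqrt    # SPD, so logm(M) is real
    return np.linalg.norm(logm(M), "fro")

# Usage: two toy 2x2 correlation matrices.
A = np.array([[1.0, 0.3], [0.3, 1.0]])
B = np.array([[1.0, -0.2], [-0.2, 1.0]])
print(spd_distance(A, B))
```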
With regard to applications, we consider the basic learning tasks of clustering and classification on data taken from network time series and, in particular, from brain networks. Several research directions are currently under study.
See, for example, our papers in IEEE Open Journal of Signal Processing and Signal Processing.