Title: How Health Opinions Evolve under News and Social Influence
Abstract: How do social interactions and news exposure shape health-related opinions and behaviors, such as decisions about vaccination? To address this question, I will present a hybrid model of opinion dynamics on directed social networks in which individuals do not update continuously but instead accumulate evidence from socially filtered information and revise their opinions only when that evidence exceeds a threshold.
This mechanism produces a rich variety of collective outcomes, including consensus, polarization, and fragmentation, and these regimes can be organized in parameter space through interpretable controls corresponding to platform design, curation, and user responsiveness. I will also discuss an extension that incorporates news media as a distinct external source with its own reach, attention level, and receptivity. Motivated by vaccination and health information, this framework represents a step toward coupled information-epidemic models in which opinions influence behavior and epidemic prevalence feeds back into the information environment.
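To make the evidence-accumulation mechanism concrete, here is a minimal toy sketch (an illustrative assumption, not the model presented in the talk): agents on a random directed network sample posts from accounts they follow, accumulate the signal, and only revise their opinion once the accumulated evidence crosses a threshold. Network size, edge probability, and the threshold value are all assumed for illustration.

```python
# Toy sketch (not the speaker's exact model): threshold-based opinion updating
# on a directed network. Each agent accumulates evidence from the opinions of
# the accounts it follows and revises its own opinion only when that evidence
# crosses a threshold.
import numpy as np

rng = np.random.default_rng(0)
n = 200                       # number of agents (assumed)
p = 0.05                      # probability of a directed "follows" edge (assumed)
theta = 3.0                   # evidence threshold for revising an opinion (assumed)

follows = rng.random((n, n)) < p          # follows[i, j]: i receives j's posts
np.fill_diagonal(follows, False)
opinion = rng.choice([-1.0, 1.0], size=n) # initial pro/anti stance
evidence = np.zeros(n)                    # accumulated, socially filtered signal

for step in range(500):
    for i in range(n):
        sources = np.flatnonzero(follows[i])
        if sources.size == 0:
            continue
        # Sample one post from a random followed account and add it to the evidence.
        evidence[i] += opinion[rng.choice(sources)]
        # Revise only when the evidence is strong enough, then reset it.
        if abs(evidence[i]) >= theta:
            opinion[i] = np.sign(evidence[i])
            evidence[i] = 0.0

print("fraction holding opinion +1:", np.mean(opinion > 0))
```

Varying the threshold and the connectivity in a sketch like this is one way to probe when the population reaches consensus versus splitting into persistent camps.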
Title: On the uniqueness of network identification of Boolean networks
Abstract: Finding conditions on input data that guarantee a unique wiring diagram has been an open problem since the introduction of algebraic geometry techniques to Boolean network analysis. In this talk, we characterize the data sets that identify a wiring diagram before any experiments are conducted. We show that the geometric features of the discrete data on an n-dimensional hypercube completely determine whether network inference is guaranteed to be unique. This approach also provides a heuristic for minimizing the number of candidate wiring diagrams, offering concrete criteria for experiment design.
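As a toy illustration of what "uniqueness" means here (an assumed setup, not the talk's algorithm): given partial input/output data for one node, call a variable subset feasible if some Boolean function of only those variables fits the data, and take the inclusion-minimal feasible subsets as candidate wirings. The data identify the wiring uniquely exactly when one minimal candidate remains. The example data points below are invented for illustration.

```python
# Toy illustration: which variable subsets (candidate wirings) are consistent
# with partial Boolean data, and is the minimal one unique?
from itertools import combinations

def feasible(data, subset):
    """True if some Boolean function of the variables in `subset` fits `data`."""
    seen = {}
    for x, y in data.items():
        key = tuple(x[i] for i in subset)
        if seen.setdefault(key, y) != y:   # two inputs agree on subset, outputs differ
            return False
    return True

def minimal_wirings(data, n):
    """All inclusion-minimal variable subsets consistent with the data."""
    feas = [set(s) for k in range(n + 1) for s in combinations(range(n), k)
            if feasible(data, s)]
    return [s for s in feas if not any(t < s for t in feas)]

# Three observations of a node with n = 3 potential regulators (assumed data).
data = {(0, 0, 0): 0, (1, 0, 1): 1, (0, 1, 1): 0}
print(minimal_wirings(data, 3))   # two minimal sets -> the wiring is not unique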
Abstract: Exploration in uncertain and changing environments poses a fundamental dilemma: While it benefits the group by generating valuable information, it imposes direct costs on individual explorers. What conditions allow such adventurous individuals to exist and persist in decentralized collectives? We address this question using a normative model of collective foraging in which agents decide whether to commit or remain idle, with rewards and information shared across the group.
Using dynamic programming and asymptotic analysis, we show that optimal performance arises from structured heterogeneity: a small, heterogeneous subset of risk-taking explorers ventures even when expected returns are negative, while the majority of risk-averse individuals commit collectively once conditions become favorable. This decentralized composition matches the efficiency of a centrally coordinated group through simple threshold-based rules. The degree of heterogeneity peaks at intermediate commitment cost and moderate environmental switch amplitude but diminishes in rapidly changing environments.
These results reveal when and why adventurous individuals emerge, showing how collectives self-organize to balance exploration and exploitation under uncertainty.
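A minimal sketch of the threshold-based picture (assumed toy dynamics, not the talk's normative dynamic-programming model): agents share a posterior belief that the environment is in a rewarding state and commit when that belief exceeds a personal threshold; a few "explorers" have thresholds below the break-even belief, so they commit even when the expected return is negative, and their shared outcomes inform the rest of the group. All parameter values are assumptions.

```python
# Toy simulation of threshold-based commit/idle decisions with shared information.
import numpy as np

rng = np.random.default_rng(1)
n_agents, steps = 50, 2000
q_good, q_bad, cost = 0.6, 0.2, 0.3      # reward probabilities and commitment cost (assumed)
h = 0.01                                 # per-step probability the hidden state switches

break_even = (cost - q_bad) / (q_good - q_bad)       # belief at which E[return] = 0
thresholds = np.full(n_agents, break_even + 0.15)    # risk-averse majority
thresholds[:3] = break_even - 0.20                   # a few adventurous explorers

state_good = True
belief = 0.5                             # shared posterior P(state is good)
payoff = 0.0

for t in range(steps):
    if rng.random() < h:                 # hidden environment may switch
        state_good = not state_good
    committed = belief >= thresholds     # threshold-based commit/idle decisions
    q = q_good if state_good else q_bad
    outcomes = rng.random(committed.sum()) < q
    payoff += outcomes.sum() - cost * committed.sum()
    # Shared Bayesian update from committed agents' outcomes, then account for
    # possible switching by relaxing the belief toward 1/2.
    for success in outcomes:
        like_g, like_b = (q_good, q_bad) if success else (1 - q_good, 1 - q_bad)
        belief = like_g * belief / (like_g * belief + like_b * (1 - belief))
    belief = (1 - h) * belief + h * (1 - belief)

print("average payoff per step:", payoff / steps)
```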
Title: The Representation Jensen-Shannon Divergence
Abstract: Quantifying the difference between probability distributions is a fundamental problem in machine learning. However, since the underlying distributions of the data are unknown, statistical divergences must be estimated from empirical data. In this work, we propose the representation Jensen-Shannon divergence (RJSD), which avoids estimating probability density functions by embedding the data in a reproducing kernel Hilbert space (RKHS), where data distributions are represented via uncentered covariance operators. We provide estimators based on Gram matrices and on empirical covariance matrices constructed from random Fourier features. Theoretical analysis reveals that RJSD is a lower bound on the Jensen-Shannon divergence, enabling variational estimation. Additionally, we show that RJSD is a higher-order extension of the maximum mean discrepancy (MMD), providing a more sensitive measure of distributional differences. Our experimental results demonstrate RJSD's superiority in two-sample testing, distribution shift detection, and unsupervised domain adaptation, outperforming state-of-the-art techniques. RJSD's versatility and effectiveness make it a promising tool for machine learning research and applications.
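A hedged sketch of a Gram-matrix estimator in this spirit (an assumed form, not necessarily the authors' exact estimator): each sample set is represented by an uncentered covariance operator in an RKHS, von Neumann entropies are computed from weighted Gram-matrix spectra, and the entropies are combined Jensen-Shannon style; the Gaussian kernel and bandwidth are assumptions.

```python
# Sketch of a representation-JSD-style estimate from Gram-matrix spectra.
import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    """Gaussian kernel Gram matrix; k(x, x) = 1 so the weighted spectrum sums to one."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-np.maximum(d2, 0) / (2 * sigma**2))

def von_neumann_entropy(weights, K):
    """Entropy of sum_i w_i phi(z_i) phi(z_i)^* via the weighted Gram spectrum."""
    w = np.sqrt(weights)
    lam = np.linalg.eigvalsh(w[:, None] * K * w[None, :])
    lam = np.clip(lam, 1e-12, None)
    return float(-np.sum(lam * np.log(lam)))

def rjsd_estimate(X, Y, sigma=1.0):
    n, m = len(X), len(Y)
    Z = np.vstack([X, Y])
    # Mixture covariance operator: half the mass on each sample set.
    w_mix = np.concatenate([np.full(n, 0.5 / n), np.full(m, 0.5 / m)])
    s_mix = von_neumann_entropy(w_mix, gaussian_gram(Z, Z, sigma))
    s_x = von_neumann_entropy(np.full(n, 1.0 / n), gaussian_gram(X, X, sigma))
    s_y = von_neumann_entropy(np.full(m, 1.0 / m), gaussian_gram(Y, Y, sigma))
    return s_mix - 0.5 * (s_x + s_y)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(300, 2))
Y = rng.normal(1.5, 1.0, size=(300, 2))      # shifted distribution
print("shifted vs. matched:", rjsd_estimate(X, Y),
      rjsd_estimate(X, rng.normal(0.0, 1.0, size=(300, 2))))
```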
Title: Weighted quantization using MMD: From mean field to mean shift via gradient flows
Abstract: Approximating a probability distribution using a set of particles is a fundamental problem in machine learning and statistics, with applications including clustering and quantization. Formally, we seek a weighted mixture of Dirac measures that best approximates the target distribution. While much existing work relies on the Wasserstein distance to quantify approximation errors, maximum mean discrepancy (MMD) has received comparatively less attention, especially when allowing for variable particle weights. We argue that a Wasserstein-Fisher-Rao gradient flow is well-suited for designing quantizations optimal under MMD. We show that a system of interacting particles satisfying a set of ODEs discretizes this flow. We further derive a new fixed-point algorithm called mean shift interacting particles (MSIP). We show that MSIP extends the classical mean shift algorithm, widely used for identifying modes in kernel density estimators. Moreover, we show that MSIP can be interpreted as preconditioned gradient descent and that it acts as a relaxation of Lloyd's algorithm for clustering. Our unification of gradient flows, mean shift, and MMD-optimal quantization yields algorithms that are more robust than state-of-the-art methods, as demonstrated via high-dimensional and multi-modal numerical experiments.
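To illustrate the flavor of weighted MMD quantization, here is a hedged sketch (not the paper's MSIP algorithm): a simple Euler discretization of a Wasserstein-Fisher-Rao-style gradient flow on the squared MMD, in which particle positions follow the transport gradient of the MMD witness function and particle weights are updated multiplicatively and renormalized. The kernel, bandwidth, step sizes, and target data are all assumptions.

```python
# Sketch: weighted particle quantization of an empirical measure under MMD
# via an explicit transport + reaction (WFR-style) update.
import numpy as np

def gaussian_kernel(A, B, sigma):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-np.maximum(d2, 0) / (2 * sigma**2))

def quantize_mmd(Y, n_particles=20, sigma=0.5, lr_x=0.1, lr_w=0.2, iters=500, seed=0):
    """Approximate the empirical measure of Y with a weighted mixture of Diracs."""
    rng = np.random.default_rng(seed)
    X = Y[rng.choice(len(Y), n_particles, replace=False)].copy()
    w = np.full(n_particles, 1.0 / n_particles)
    for _ in range(iters):
        Kxx = gaussian_kernel(X, X, sigma)
        Kxy = gaussian_kernel(X, Y, sigma)
        # MMD witness function evaluated at the particles.
        f = Kxx @ w - Kxy.mean(axis=1)
        # Transport step: move particles down the witness gradient.
        grad_f = ((Kxx * w) @ X - (Kxx @ w)[:, None] * X
                  - (Kxy @ Y / len(Y) - Kxy.mean(1)[:, None] * X)) / sigma**2
        X -= lr_x * grad_f
        # Reaction step: reweight particles, then project back onto the simplex.
        w *= np.exp(-lr_w * (f - w @ f))
        w /= w.sum()
    return X, w

# Example: quantize a two-component Gaussian mixture with 20 weighted particles.
rng = np.random.default_rng(1)
Y = np.vstack([rng.normal([-2, 0], 0.5, (500, 2)), rng.normal([2, 0], 0.5, (500, 2))])
X, w = quantize_mmd(Y)
print("weight range:", w.min(), w.max())
```

Allowing the weights to adapt, as in this sketch, is what distinguishes the variable-weight setting from fixed-weight particle flows on MMD.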