Wednesday January 31, 2024 at 16:00, 617 Wachman Hall
Low rank time integrators for solving time-dependent PDEs
Jingmei Qiu, University of Delaware
I will provide an overview of low rank time integrators for time-dependent PDEs. These include an explicit scheme in which a time step is followed by an SVD truncation procedure, with application to the Vlasov equations; two implicit schemes, the Reduced Augmentation Implicit Low-rank (RAIL) scheme and a Krylov subspace low rank scheme, with applications to the heat equation and the Fokker-Planck equation; as well as implicit-explicit low rank integrators for advection-diffusion equations.
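As a rough illustration of the step-and-truncate idea mentioned above, here is a minimal Python sketch that advances a low-rank factorization of a 2D heat-equation solution with a forward-Euler step and then re-compresses it with an SVD truncation. The grid, time step, and tolerance are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def svd_truncate(U, S, Vt, tol=1e-8):
    """Drop singular values below a relative tolerance to restore low rank."""
    r = max(1, int(np.sum(S > tol * S[0])))
    return U[:, :r], S[:r], Vt[:r, :]

# Illustrative setup: 2D heat equation u_t = u_xx + u_yy on a tensor grid,
# with the solution stored in factored form u ~ U diag(S) V^T.
n, dt, steps = 64, 1e-5, 50
x = np.linspace(0.0, 1.0, n)
u0 = np.outer(np.sin(np.pi * x), np.sin(2 * np.pi * x))     # rank-1 initial data
U, S, Vt = svd_truncate(*np.linalg.svd(u0, full_matrices=False))

# Second-difference operator (homogeneous Dirichlet) acting in each direction.
h = x[1] - x[0]
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

for _ in range(steps):
    A = U * S                                 # n x r factor carrying diag(S)
    # Forward-Euler step written in factored form: u + dt*(L u + u L^T).
    U_wide = np.hstack([A, dt * (L @ A), dt * A])
    V_wide = np.hstack([Vt.T, Vt.T, L @ Vt.T])
    # Re-compress the widened factors (QR of each factor + a small SVD).
    Qu, Ru = np.linalg.qr(U_wide)
    Qv, Rv = np.linalg.qr(V_wide)
    Uc, S, Vtc = np.linalg.svd(Ru @ Rv.T)
    U, S, Vt = svd_truncate(Qu @ Uc, S, Vtc @ Qv.T)

print("numerical rank after", steps, "steps:", len(S))
```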
Wednesday February 7, 2024 at 16:00, 617 Wachman Hall
Minimal Quantization Model for an Active System
Rodolfo Ruben Rosales, Massachusetts Institute of Technology
A small liquid drop placed above the vibrating surface of a liquid will not (under appropriate conditions) fall and merge. In fact it will bounce from the surface, and can be made to do so for very many bounces (hundreds of thousands). If the liquid below is just under the Faraday threshold, the drop excites waves with each bounce, and via these waves it can extract momentum from the fluid underneath it, and it starts moving ("walking") at some preferred speed. The drop-wave system then becomes a peculiar active system, where the active elements interact with each other (if there are many drops) via waves, as well as with their own past via the waves generated earlier in their history. This system has many special properties, some reminiscent of quantum mechanics. In this talk I will focus on one such property:
If the drop is constrained to move in a bounded region by some external force (e.g., Coriolis), then its path exhibits radial quantization: the statistics for the radius of curvature along the drop path are concentrated on a discrete set of values. The question is why? There are various models that predict this, but the question is not about the model(s), but about the mechanism behind the behavior. An obvious answer is that it is because the drop motion is caused via waves. This is, basically, correct; but too vague, even misleading. First of all, the drop does not move on some "external" wave field, but on a self-generated one. Second, the waves decay, hence the wave field is dominated by the waves produced along the recent path. Yet, if one discards all but the recent past, the quantization disappears --- the recent past selects the preferred speed, but does not quantize. It turns out that the effect is caused by (exponentially suppressed) waves emitted in the past at "special" regions where constructive interference magnifies their effect. As I hope to show, this gives a simple and intuitive explanation of how the radii selection occurs.
Wednesday February 14, 2024 at 16:00, 617 Wachman Hall
Combining metabolic models, large data sets, and deep learning to improve systems biology simulators
Sean McQuade, Rutgers University Camden
Chemical networks, such as metabolism, can be simulated to assist in an array of research including new drug discovery, personalized medicine, and testing high-risk treatment before applying it to humans. Improved biochemical simulations can also reduce our dependence on animal testing before clinical trials. This talk demonstrates a mathematical framework for biochemical systems that was designed with two goals in mind: 1. improved early phase drug discovery and 2. personalized medicine. The talk also addresses a particular contribution that can be made by deep learning models.
Wednesday February 28, 2024 at 16:00, 617 Wachman Hall
Dynamic Boundary Conditions and Motion of Grain Boundaries
Chun Liu, Illinois Institute of Technology
I will present the dynamic boundary conditions in the general energetic variational approaches. The focus is on the coupling between the bulk effects and the active boundary conditions. In particular, we will study applications in the evolution of grain boundary networks, especially the drag of triple junctions. This is joint work with Yekaterina Epshteyn (University of Utah) and Masashi Mizuno (Nihon University).
Monday March 11, 2024 at 16:00,
Randomized Numerical Linear Algebra
Erik Boman, Sandia National Laboratory
Randomization has become a popular technique in numerical linear algebra in recent years, with applications in several areas from scientific computing to machine learning. We review some problems where it works well. Sketching is a powerful way to reduce a high-dimensional problem to a lower-dimensional problem. Sketch-and-solve and sketch-and-precondition are the two main approaches for linear systems and least squares problems. Finally, we describe two recent applications in more detail: fast and stable orthogonalization (QR on tall, skinny matrices), and spectral graph partitioning.
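As a hedged illustration of the sketch-and-solve idea for overdetermined least squares, here is a short Python sketch assuming a dense Gaussian sketch (structured or sparse sketches would be used in practice); the problem sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overdetermined least-squares problem  min_x ||A x - b||_2  with m >> n.
m, n = 10000, 50
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 0.01 * rng.standard_normal(m)

# Sketch-and-solve: compress the rows with a random sketch S (s << m) and
# solve the much smaller problem  min_x ||S A x - S b||_2.
s = 400                                        # sketch size, a few times n
S = rng.standard_normal((s, m)) / np.sqrt(s)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
print("relative error of sketch-and-solve:",
      np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))
# Sketch-and-precondition would instead factor S @ A (e.g. by QR) and use the
# factor to precondition an iterative solver applied to the full problem.
```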
Wednesday March 27, 2024 at 16:00, 617 Wachman Hall
Numerical Solution of Double Saddle-Point Systems
Chen Greif, University of British Columbia
Double saddle-point systems have been drawing increasing attention in the past few years, due to the importance of multiphysics and other relevant applications and the challenge of developing efficient iterative numerical solvers. In this talk we describe some of the numerical properties of the matrices arising from these problems. We derive eigenvalue bounds and analyze the spectrum of preconditioned matrices, and show that if the Schur complements are effectively approximated, the eigenvalue structure gives rise to rapid convergence of Krylov subspace solvers. A few numerical experiments illustrate our findings.
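To make the role of the Schur complements concrete, here is a small, hedged Python sketch on a random instance of one common double saddle-point block structure (not a multiphysics discretization and not the talk's examples): it builds an ideal block-diagonal preconditioner from exact Schur complements and prints the extreme eigenvalues of the preconditioned matrix.

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(1)
n1, n2, n3 = 60, 30, 15

# A random instance of one common double saddle-point block structure:
#     K = [[A, B^T, 0], [B, 0, C^T], [0, C, 0]],  A symmetric positive definite.
A = rng.standard_normal((n1, n1)); A = A @ A.T + n1 * np.eye(n1)
B = rng.standard_normal((n2, n1))
C = rng.standard_normal((n3, n2))
K = np.block([[A,                  B.T,                np.zeros((n1, n3))],
              [B,                  np.zeros((n2, n2)), C.T],
              [np.zeros((n3, n1)), C,                  np.zeros((n3, n3))]])

# "Ideal" block-diagonal preconditioner built from exact nested Schur
# complements; in practice these blocks are only approximated.
S1 = B @ np.linalg.solve(A, B.T)
S2 = C @ np.linalg.solve(S1, C.T)
P = block_diag(A, S1, S2)

# Eigenvalues of the preconditioned matrix (via the symmetric similarity
# transform L^{-1} K L^{-T} with P = L L^T) govern Krylov convergence; with
# exact Schur complements they cluster in a few short intervals bounded away
# from zero, which is the behavior the eigenvalue bounds make precise.
L = np.linalg.cholesky(P)
M = np.linalg.solve(L, np.linalg.solve(L, K.T).T)
ev = np.sort(np.linalg.eigvalsh(M))
neg, pos = ev[ev < 0], ev[ev > 0]
print(f"negative eigenvalues in [{neg.min():.3f}, {neg.max():.3f}]")
print(f"positive eigenvalues in [{pos.min():.3f}, {pos.max():.3f}]")
```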
Wednesday April 3, 2024 at 16:00, 617 Wachman Hall
Decentralized Stochastic Bilevel Optimization
Hongchang Gao, Temple University
Stochastic Bilevel Optimization (SBO) has widespread applications in machine learning, such as meta learning, hyperparameter optimization, and network architecture search. To train those machine learning models on large-scale distributed data, it is necessary to develop distributed SBO algorithms. Therefore, Decentralized Stochastic Bilevel Optimization (DSBO) has been actively studied in recent years due to the efficiency and robustness of decentralized communication. However, it is challenging to estimate the stochastic hypergradient on each worker due to the loss function's bilevel structure and decentralized communication.
In this talk, I will present our recent work on decentralized stochastic bilevel gradient descent algorithms. On the algorithmic design side, I will talk about how to estimate the hypergradient without incurring large communication overhead under both homogeneous and heterogeneous settings. On the theoretical analysis side, I will describe the convergence rate of our algorithms, showing how the communication topology, the number of workers, and heterogeneity affect the theoretical convergence rate. Finally, I will show the empirical performance of our algorithms.
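As a hedged, single-worker sketch of only the bilevel ingredient (it does not touch the decentralized or stochastic aspects of the talk), the example below computes the hypergradient of a toy quadratic bilevel problem via the implicit function theorem and checks it against finite differences.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 5

# Toy deterministic bilevel problem:
#   upper level: F(x) = f(x, y*(x)) = 0.5 * ||y*(x) - c||^2
#   lower level: y*(x) = argmin_y g(x, y),  g(x, y) = 0.5 * y^T A y - x^T y
A = rng.standard_normal((dim, dim)); A = A @ A.T + dim * np.eye(dim)
c = rng.standard_normal(dim)

def y_star(x):                      # lower-level solution: A y = x
    return np.linalg.solve(A, x)

def F(x):                           # upper-level objective
    return 0.5 * np.sum((y_star(x) - c) ** 2)

# Implicit-function-theorem hypergradient:
#   dF/dx = grad_x f - (grad^2_{xy} g) (grad^2_{yy} g)^{-1} grad_y f.
# Here grad_x f = 0, grad^2_{xy} g = -I, grad^2_{yy} g = A, grad_y f = y* - c,
# so dF/dx = A^{-1} (y*(x) - c); in practice the solve with A is approximated
# by a few matrix-vector products (e.g. Neumann series or conjugate gradient).
x = rng.standard_normal(dim)
hypergrad = np.linalg.solve(A, y_star(x) - c)

# Check against central finite differences of F.
eps, fd = 1e-6, np.zeros(dim)
for i in range(dim):
    e = np.zeros(dim); e[i] = eps
    fd[i] = (F(x + e) - F(x - e)) / (2 * eps)
print("max abs difference vs finite differences:", np.abs(hypergrad - fd).max())
```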
Wednesday April 10, 2024 at 16:00, 617 Wachman Hall
New Perspectives on Multiscale Modeling, Simulation, and Analysis of Grain Growth in Polycrystalline Materials
Yekaterina Epshteyn, University of Utah
Many technologically useful materials are polycrystals, composed of small monocrystalline grains (crystallites with different lattice orientations) that are separated by grain boundaries. One of the central problems in materials science is to design technologies capable of producing an arrangement of grains that delivers a desired set of material properties.
One method by which the grain structure can be engineered in polycrystalline materials is grain growth (coarsening) of a starting structure. Grain growth in polycrystals is a very complex multiscale, multiphysics process. It can be regarded as the anisotropic evolution of a large cellular network and can be described by a set of deterministic local evolution laws for the growth of individual grains combined with stochastic models for the interaction between them. In this talk, we will present new perspectives on mathematical modeling, numerical simulation, and analysis of the evolution of the grain boundary network in polycrystalline materials. Relevant recent experiments will be discussed as well.
Wednesday April 17, 2024 at 16:00, 617 Wachman Hall
A Mixed Sparse-Dense BLR Solver for Electromagnetics
Francois-Henry Rouet, Ansys
Element-by-element preconditioners were an active area of research in the 80s and 90s, and they found some success for problems arising from Finite Element discretizations, in particular in structural mechanics and fluid dynamics (e.g., the "EBE" preconditioner of Hughes, Levit, and Winget). Here we consider problems arising from Boundary Element Methods, in particular the discretization of Maxwell's equations in electromagnetism. The matrix comes from a collection of elemental matrices defined over all pairs of elements in the problem and is therefore dense. Inspired by the EBE idea, we select subsets of elemental matrices to define different sparse preconditioners that we can factor with a direct method. Furthermore, the input matrix is rank-structured ("data sparse") and is compressed to accelerate the matrix-vector products. We use the Block Low-Rank approach (BLR). In the BLR approach, a given dense matrix (or submatrix, in the sparse case) is partitioned into blocks following a simple, flat tiling; off-diagonal blocks are compressed into low-rank form using a rank-revealing factorization, which reduces storage and the cost of operating with the matrix. We demonstrate results for industrial problems coming from the LS-DYNA multiphysics software.
Joint work with Cleve Ashcraft and Pierre L'Eplattenier.
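A toy, hedged illustration of the block low-rank (BLR) idea on a smooth kernel matrix standing in for a BEM matrix: tile the matrix, keep diagonal blocks dense, and compress off-diagonal blocks, here with a truncated SVD rather than the rank-revealing factorization mentioned above. The kernel, block size, and tolerance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense matrix from a smooth kernel on 1D points (a stand-in for a BEM matrix);
# off-diagonal blocks of such matrices have rapidly decaying singular values.
n, block = 512, 64
pts = np.sort(rng.uniform(0.0, 1.0, n))
K = 1.0 / (1.0 + np.abs(pts[:, None] - pts[None, :]))

def compress_block(M, tol):
    """Return low-rank factors (X, Y) with M ~ X @ Y, via a truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))
    return U[:, :r] * s[:r], Vt[:r, :]

tol, dense_entries, blr_entries = 1e-6, 0, 0
for i in range(0, n, block):
    for j in range(0, n, block):
        M = K[i:i + block, j:j + block]
        dense_entries += M.size
        if i == j:
            blr_entries += M.size            # diagonal blocks stay dense
        else:
            X, Y = compress_block(M, tol)
            blr_entries += X.size + Y.size   # store the factors instead
print("storage ratio (BLR / dense):", blr_entries / dense_entries)
```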
Wednesday September 4, 2024 at 16:00, 617 Wachman Hall
Mathematical and computational epidemiology of antimalarial drug resistance evolution
Maciej Boni, Temple University
Wednesday September 11, 2024 at 16:00, 617 Wachman Hall
Trustworthy Machine Learning for Biomedicine
Xinghua (Mindy) Shi, Temple University
The recent biomedical data deluge has fundamentally transformed biomedical research into a data science frontier. The unprecedented accumulation of biomedical data presents a unique yet challenging opportunity to develop novel methods leveraging artificial intelligence and machine learning to further our understanding of biology and advance medicine. In this talk, I will first introduce cutting-edge research in characterizing human genetic variation and its associations with disease. I will then present statistical and machine learning methods for robust modeling of medical data. Finally, I will give an overview of recent developments in trustworthy machine learning to address model overfitting, privacy risks, and bias.
Wednesday September 18, 2024 at 16:00, 617 Wachman Hall
On the lack of external response of a nonlinear medium in the second-harmonic generation process.
Narek Hovsepyan, Rutgers University
Second Harmonic Generation (SHG) is a process in which the input wave (e.g. laser beam) interacts with a nonlinear medium and generates a new wave, called the second harmonic, at double the frequency of the original input wave. We investigate whether there are situations in which the generated second harmonic wave does not scatter and is localized inside the medium, i.e., the nonlinear interaction of the medium with the probing wave is invisible to an outside observer. This leads to the analysis of a semilinear elliptic system formulated inside the medium with non-standard boundary conditions. More generally, we set up a mathematical framework needed to investigate a multitude of questions related to the nonlinear scattering problem associated with SHG (or other similar multi-frequency optical phenomena). This is based on a joint work with Fioralba Cakoni, Matti Lassas and Michael Vogelius.
Wednesday September 25, 2024 at 16:00, 617 Wachman Hall
Mode switching in organisms for solving explore-versus-exploit problems
Kathleen Hoffman, University of Maryland Baltimore County
Fish use active sensing to constantly re-evaluate their position in space. The weakly electric glass knifefish, Eigenmannia virescens, incorporates an electric field as one of its active sensing mechanisms. The motion of the knifefish in a stationary refuge is captured using high-resolution motion tracking and shows many small-amplitude oscillations inside the refuge coupled with high-amplitude "jumps". We show that this active sensing mechanism is not reflected by a Gaussian distribution of the velocities. Instead, we show that the velocities are more accurately reflected by a mixture of Gaussians because of the number of high-amplitude jumps in the tails of the velocity distribution. The experimental position measurements were taken in both the light and the dark, showing more frequent bursts of faster movement in the dark, where presumably the fish are relying more on their electric sensor than their vision. Computational models of active state estimation, with noise injected into the system based on threshold triggers, exhibit velocity distributions that resemble those of the experimental data more closely than models with pure noise or zero noise inputs. Similar distributions have been observed in a variety of different senses and species.
This is joint work with Debojyoti Biswas (JHU), Noah Cowan (JHU), John Guckenheimer (Cornell), Andrew Lamperski (UMN), and Yu Yang (JHU).
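As a minimal sketch of the distributional comparison described above, the following example fits a single Gaussian and a two-component Gaussian mixture to synthetic, heavy-tailed data standing in for the tracked velocities (the data and component count are illustrative, not from the experiments).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-in for tracked velocities: many small-amplitude movements
# plus occasional high-amplitude "jumps" that fatten the tails.
small = rng.normal(0.0, 0.2, size=4500)
jumps = rng.normal(0.0, 2.0, size=500)
v = np.concatenate([small, jumps]).reshape(-1, 1)

for k in (1, 2):
    gm = GaussianMixture(n_components=k, random_state=0).fit(v)
    print(f"{k} component(s): mean log-likelihood = {gm.score(v):.3f}, "
          f"BIC = {gm.bic(v):.1f}")
# The two-component mixture wins (higher log-likelihood, lower BIC),
# mirroring the argument that a single Gaussian misses the jump-driven tails.
```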
Wednesday October 2, 2024 at 16:00, 617 Wachman Hall
The Average Rate of Convergence of the Exact Line Search Gradient Descent Method, with applications to polynomial optimization problems in data sciences
Thomas P.Y. Yu, Drexel University.
It is very well known that when the exact line search gradient descent method is applied to a convex quadratic objective, the worst-case rate of convergence (ROC), among all seed vectors, deteriorates as the condition number of the Hessian of the objective grows. By an elegant analysis due to H. Akaike, it is generally believed -- but not proved -- that in the ill-conditioned regime the ROC for almost all initial vectors, and hence also the average ROC, is close to the worst-case ROC. We complete Akaike's analysis using the theorem of center and stable manifolds. Our analysis also makes apparent the effect of an intermediate eigenvalue in the Hessian by establishing the following somewhat amusing result: In the absence of an intermediate eigenvalue, the average ROC gets arbitrarily fast -- not slow -- as the Hessian gets increasingly ill-conditioned.
This work is motivated by contemporary applications in data sciences. We shall discuss some of the surprising properties of the polynomial optimization problems involved in these applications.
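As a small numerical companion (with illustrative eigenvalue choices, not the talk's examples), the sketch below runs exact line search gradient descent on f(x) = 0.5 x^T A x, where the optimal step has the closed form alpha_k = (g_k^T g_k)/(g_k^T A g_k), and contrasts a Hessian with only the two extreme eigenvalues against one that also has intermediate eigenvalues.

```python
import numpy as np

def exact_line_search_gd(A, x0, iters=500):
    """Gradient descent on f(x) = 0.5 * x^T A x with exact line search."""
    x, errs = x0.copy(), []
    for _ in range(iters):
        g = A @ x
        alpha = (g @ g) / (g @ A @ g)        # closed-form optimal step length
        x = x - alpha * g
        errs.append(np.sqrt(x @ A @ x))      # energy-norm error (minimizer is 0)
        if errs[-1] < 1e-12:                 # stop well above machine precision
            break
    errs = np.array(errs)
    return np.median(errs[1:] / errs[:-1])   # typical per-step contraction

rng = np.random.default_rng(0)
kappa = 1e3                                   # condition number of the Hessian

for label, eigs in [("extreme eigenvalues only     ", np.array([1.0, kappa])),
                    ("with intermediate eigenvalues", np.geomspace(1.0, kappa, 50))]:
    rate = exact_line_search_gd(np.diag(eigs), rng.standard_normal(len(eigs)))
    print(label, "-> observed contraction per step ~", round(rate, 4))
# The worst-case contraction in the energy norm is (kappa-1)/(kappa+1) ~ 0.998.
# With intermediate eigenvalues present, a random seed ends up near that rate;
# with only the two extreme eigenvalues, the observed rate is far faster.
```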
Wednesday October 16, 2024 at 16:00, 617 Wachman Hall
Seminar postponed
Wednesday October 23, 2024 at 16:00, 617 Wachman Hall
Modified Patankar-Runge-Kutta Methods: Introduction, Analysis and Numerical Applications
Andreas Meister, University of Kassel
Mathematical modeling leads to so-called convection-diffusion-reaction equations in the form of systems of partial differential equations in numerous practical applications. Examples are turbulent air flows or algae growth in oceans or lakes. After discretization of the spatial derivatives, an extremely large system of ordinary differential equations arises. A reasonable numerical time integration scheme must respect properties of the original model, such as the positivity of single balance quantities and conservativity. In the talk we will present so-called modified Patankar-Runge-Kutta (MPRK) schemes. They adapt explicit Runge-Kutta schemes in a way that ensures positivity and conservativity irrespective of the time step size. We introduce a general definition of MPRK schemes and present a thorough investigation of necessary as well as sufficient conditions to derive first, second and third order accurate MPRK schemes. The theoretical results will be confirmed by numerical experiments in which MPRK schemes are applied to solve non-stiff and stiff systems of ordinary differential equations. Furthermore, we investigate the efficiency of MPRK schemes in the context of convection-diffusion-reaction equations with source terms of production-destruction type.
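As a hedged illustration of the Patankar idea, here is the first-order member of the family (the modified Patankar-Euler scheme) applied to a small nonlinear production-destruction system; the test problem and step size are illustrative choices, and the talk's MPRK schemes are higher-order extensions of this building block.

```python
import numpy as np

def modified_patankar_euler(prod, y0, dt, n_steps):
    """First-order modified Patankar-Euler scheme for y_i' = sum_j (p_ij - p_ji).

    prod(y) returns the production matrix P with P[i, j] = p_ij(y) >= 0
    (rate at which constituent j is converted into constituent i).
    The Patankar-type weights p_ij * y_j^{n+1} / y_j^n turn each step into a
    small linear system whose solution is positive and conservative for
    every step size dt > 0.
    """
    y = np.array(y0, dtype=float)
    out, n = [y.copy()], len(y)
    for _ in range(n_steps):
        P = prod(y)
        D = P.sum(axis=0)                    # total destruction rate of each y_j
        M = np.eye(n) + dt * np.diag(D / y) - dt * P / y   # P / y scales columns
        y = np.linalg.solve(M, y)
        out.append(y.copy())
    return np.array(out)

# Illustrative nonlinear three-constituent production-destruction chain
# (not the talk's examples).
def prod(y):
    P = np.zeros((3, 3))
    P[1, 0] = y[0] * y[1] / (y[0] + 1.0)     # constituent 1 -> 2
    P[2, 1] = 0.3 * y[1]                     # constituent 2 -> 3
    return P

sol = modified_patankar_euler(prod, y0=[9.98, 0.01, 0.01], dt=1.0, n_steps=30)
print("minimum value over the run:", sol.min())                       # positive
print("mass drift over the run   :", abs(sol.sum(axis=1) - 10.0).max())  # conserved
```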
Wednesday November 20, 2024 at 16:00, 617 Wachman Hall
Smart Data, Smarter Models: Enhancing the Predictive Power of Mathematical Models of Cancer
Jana Gevertz, The College of New Jersey
Mathematical models are powerful tools that can vastly improve our understanding of cancer dynamics and treatment response. However, to be useful, experimental or clinical data are necessary to both train and validate such predictive models, and not all data are created equal. Here I present two methodologies that improve upon model-informed experimental design and model-based predictions. First, I will introduce a multi-objective optimization algorithm to identify combination protocols that maximize synergy from the perspective of both efficacy and potency (toxicity), while simultaneously reconciling sometimes contradictory assessments made by different synergy metrics. Second, using the notion of parameter identifiability, I will address the question of what is the minimal amount of experimental data that needs to be collected, and when it should be collected, to have confidence in a model's predictions. Real-world applications of both methodologies will be presented.
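As a hedged sketch of the identifiability question (using an illustrative logistic growth model, not the talk's models or data), the example below profiles the goodness of fit over the carrying capacity for two sampling designs; a nearly flat profile signals a practically non-identifiable parameter.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Logistic growth with known initial size y0; unknown growth rate r and
# carrying capacity K (illustrative values).
y0, r_true, K_true = 0.1, 0.4, 10.0
def logistic(t, r, K):
    return K / (1.0 + (K / y0 - 1.0) * np.exp(-r * t))

def profile_of_K(t_data, y_data, K_grid):
    """Profile the sum of squared errors over K, minimizing over r for each K."""
    sse = lambda r, K: np.sum((logistic(t_data, r, K) - y_data) ** 2)
    return np.array([minimize_scalar(sse, bounds=(1e-3, 5.0), args=(K,),
                                     method="bounded").fun for K in K_grid])

K_grid = np.linspace(5.0, 20.0, 31)
for label, t in [("early measurements only  ", np.linspace(0.0, 3.0, 7)),
                 ("early + late measurements", np.linspace(0.0, 15.0, 7))]:
    y = logistic(t, r_true, K_true) * (1 + 0.02 * rng.standard_normal(t.size))
    prof = profile_of_K(t, y, K_grid)
    print(f"{label} spread of profiled SSE = {prof.max() - prof.min():.4f}")
# A nearly flat profile (tiny spread) means the data cannot constrain K, so
# predictions relying on K should not be trusted until later time points,
# which resolve the carrying capacity, are collected.
```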