RESEARCH INTERESTS

My interests are in harmonic analysis, wavelets, and multiscale analysis in general, in particular with applications to the analysis of graphs and data sets viewed as discrete or sampled continuous geometric structures embedded in high-dimensional spaces. I am also interested in machine learning problems, mostly from the point of view of approximation and fitting of functions under random noise and random sampling.

  1. Diffusion Wavelets: a construction of families of wavelets and Multi-resolution Analyses on graphs, manifolds and point clouds. Pictures, papers and presentations available.
  2. Diffusion Geometries: here are some links to the use of diffusion geometries in data analysis.
  3. Multiscale Geometric Methods for Data: various techniques for studying geometry of high-dimensional data in a multiscale fashion.
  4. Analysis of Molecular Dynamics Data: in collaboration with Cecilia Clementi, Mary Rohrdanz and Wenwei Zheng, we use the geometric structure of data generated by molecular dynamics simulations to construct observables that provide reaction coordinates and reduced, low-dimensional dynamics that well-approximate the long-time dynamics of the original system.
  5. Multiscale Analysis of Markov Decision Processes
  6. Visualization of large data sets.
  7. Harmonic Analysis and Wavelets: here I talk a bit about Harmonic Analysis and provide links to related web pages.
  8. Hyperspectral Imaging and Pathology: hyperspectral imaging applied to pathology.

Postdocs and students

Current: Paul Escande, Sven Cattell, Stefano Vigogna, Alessandro Lanteri, Wenjing Liao, James Murphy, Sui Tang, Ming Zhong

Past: Jake Bouvrie, Guangliang Chen (SJSU), Miles Crosskey (now at CoVar Applied Technologies), Mark Iwen (MSU), David Lawlor (now at NOKIA's HERE), Stas Minsker (USC), Jason Lee, Nate Strawn (Georgetown University), Josh Vogelstein (JHU), Yoon-Mo Jung, Prakash Balachandran, Anna V Little (Jacksonville U).

Pointers to some future, present and recent past events

Links of Interest

Programming, Code, etc...

Data Sets

Overview

We have developed several methods for analyzing, in a multiscale fashion, the geometry of high-dimensional data, when the data is close to being low-dimensional.
In particular, we developed a technique we call Multiscale SVD: the main idea is to fix a point and study the behavior of the singular values of the covariance of the data in a ball of radius r centered at that point, as r varies. This behavior reveals the local intrinsic dimension of the data, is robust to noise and sampling, and can also be used to estimate the largest region around the point which is approximately low-dimensional. This is useful in a wide variety of applications: we have used it to analyze molecular dynamics data and construct reduced models, to estimate the intrinsic dimension of a variety of data sets, and to estimate K-flats approximations to data efficiently.
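As a concrete illustration, here is a minimal Matlab sketch of the Multiscale SVD idea (an illustrative toy, not the released Multiscale SVD package below; the sampled manifold, the chosen point, and all parameter values are invented for the example):

    % Toy Multiscale SVD: singular values of local covariances at a fixed
    % point, as a function of the radius r of the ball around that point.
    rng(0);
    n = 2000; D = 20; d = 2; sigma = 0.01;
    U = rand(n, d);                                    % intrinsic parameters
    X = [U, 0.5*sin(2*pi*U(:,1)), zeros(n, D-d-1)];    % a 2-d sheet embedded in R^20
    X = X + sigma*randn(n, D);                         % ambient noise
    x0 = X(1, :);                                      % fix a point
    dists = sqrt(sum((X - x0).^2, 2));
    radii = linspace(0.05, 1, 20);
    S = zeros(numel(radii), D);
    for i = 1:numel(radii)
        Xr = X(dists <= radii(i), :);                  % data in the ball of radius r
        if size(Xr, 1) < 2, continue; end
        Xc = Xr - mean(Xr, 1);                         % center the local data
        s  = svd(Xc) / sqrt(size(Xr, 1));              % square roots of local covariance eigenvalues
        S(i, 1:numel(s)) = s.';
    end
    plot(radii, S); xlabel('r'); ylabel('local singular values');
    % The top d singular values grow linearly in r, curvature directions grow
    % quadratically, and noise directions stay flat: the gaps reveal the local
    % intrinsic dimension and the largest approximately low-dimensional scale.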
We have also developed a technique called Geometric Multi-Resolution Analysis, a.k.a. GMRA, which we use to construct piecewise-linear multiscale approximations to data. This leads to a novel way of approximating data, of decomposing data matrices in a hierarchical fashion, and of constructing dictionaries that sparsify data; we also use it as a scaffold on which to encode data efficiently and to run a variety of estimation tasks (from regression to classification to model reduction for stochastic systems) efficiently, both from a sample complexity perspective and a computational perspective.
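The following toy conveys the flavor of GMRA (a drastic simplification of the actual construction; the splitting rule, function names and parameters are invented for illustration): recursively split the data into a dyadic tree, attaching to each node a rank-d affine plane fitted by local PCA.

    % gmra_toy.m -- toy dyadic tree of local rank-d PCA planes (illustration only)
    function gmra_toy()
        rng(1);
        t = linspace(0, 4*pi, 1000)';
        X = [cos(t), sin(t), 0.1*t] + 0.01*randn(1000, 3);   % noisy curve in R^3
        root = build_node(X, (1:size(X,1))', 0, 4, 1);       % depth 4, local rank d = 1
        fprintf('approximation error at the root: %g\n', root.err);
    end

    function node = build_node(X, idx, level, maxLevel, d)
        Xi = X(idx, :);
        node.center = mean(Xi, 1);
        Xc = Xi - node.center;
        [~, ~, V] = svd(Xc, 'econ');
        node.basis = V(:, 1:d);                              % local d-dim plane (scaling space)
        resid = Xc - (Xc * node.basis) * node.basis';
        node.err = norm(resid, 'fro') / sqrt(numel(idx));    % average distance to the plane
        node.children = {};
        split = Xc * V(:, 1) > 0;                            % split along the top local direction
        if level < maxLevel && any(split) && any(~split)
            node.children = {build_node(X, idx(split),  level+1, maxLevel, d), ...
                             build_node(X, idx(~split), level+1, maxLevel, d)};
        end
    end

In the full construction each node also carries wavelet bases encoding the differences between the affine planes at consecutive scales, which is what produces the hierarchical decompositions and sparsifying dictionaries mentioned above.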

Papers

Overview

Diffusion wavelets generalize classical wavelets, allowing for multiscale analysis on general structures, such as manifolds, graphs and point clouds in Euclidean space. They allow one to perform signal processing tasks on functions on these spaces. This has several applications. The ones we are currently focusing on arise in the study of data sets which can be modeled as graphs, where one is interested in learning functions on such graphs. For example, we can consider a graph whose vertices are proteins, whose edges connect interacting proteins, and where the function on the graph labels a functionality of the protein. Or each vertex could be an image (e.g. a handwritten digit), the edges connect very similar images, and the function at each vertex is the value of the digit represented by that vertex. In classical, one-dimensional wavelet theory one applies dilations by powers of 2 and translations by integers to a mother wavelet, and obtains orthonormal wavelet bases. The classical construction has of course been generalized in many ways, considering more general groups of transformations and spaces different from the real line, such as higher-dimensional Euclidean spaces, Lie groups, etc. Most of these constructions are based on groups of geometrical transformations of the space, which are then applied (as a "change of variable") to functions on that space to obtain wavelets. On general graphs, point clouds and manifolds there may not be nice or rich groups of transformations. So instead of assuming the existence of these "symmetries", we directly use semigroups of operators acting on functions on the space (and not on the space itself). We typically use "diffusion operators", because of their nice properties. Let T be a diffusion operator on a graph (e.g. the heat operator). In many cases, our graph will be a discretization of a manifold. The study of the eigenfunctions and eigenvalues of T is known as Spectral Graph Theory and can be viewed (for our purposes) as a generalization of the classical theory of Fourier series on the circle. The advantages of wavelets and multiscale techniques over classical Fourier series are well known, so it is natural to attempt to generalize wavelet theory to the setting of diffusion operators on graphs. Following Stein, we take the view that the dyadic powers of the operator T establish a scale for performing multiresolution analysis on the graph. In the paper "Diffusion Wavelets" we introduce a procedure for constructing scaling functions and wavelets adapted to these scales.
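To make the role of the dyadic powers concrete, here is a small self-contained Matlab sketch (all parameter values are illustrative): a diffusion operator T on a graph of 256 points sampled from the circle, whose powers T^(2^j) have rapidly decreasing numerical rank - exactly the compressibility the construction exploits.

    % A diffusion operator on a 256-point circle and its dyadic powers.
    n = 256;
    theta = 2*pi*(0:n-1)'/n;
    X = [cos(theta), sin(theta)];                     % points on the circle
    D2 = sum(X.^2, 2) + sum(X.^2, 2)' - 2*(X*X');     % squared pairwise distances
    W  = exp(-D2/0.01) .* (D2 < 0.05);                % local Gaussian weights
    T  = diag(1./sum(W, 2)) * W;                      % row-stochastic diffusion operator
    P  = T;
    for j = 0:5
        fprintf('numerical rank of T^(2^%d): %d\n', j, rank(P, 1e-8));
        P = P * P;                                    % squaring: P becomes T^(2^(j+1))
    end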

Index

Papers

Sketch of the Construction

In several applications there is a need to organize structures in a multi-resolution fashion, in order to process them, understand them, compress them, etc. Maybe the most classical example is given by Fourier analysis, where different resolutions correspond to different frequency bands in which a signal can be analyzed. However, wavelets allow one to perform a multi-resolution analysis in a somewhat stronger and better organized way in the spatial domain. The construction of wavelets in Euclidean spaces is by now in many ways quite well understood, even if interesting open questions remain for higher-dimensional wavelet constructions. Wavelets have also found wide applications in numerical analysis, both as a mathematical foundation of Fast Multipole Methods and by providing bases for Galerkin methods. Already in this latter setting, though, more flexibility has been required than low-dimensional Euclidean wavelets offer, since multi-resolutions are needed on rather general domains and manifolds. Both the Fast Multipole Methods and the Galerkin wavelet methods are not very well adapted to the operator, and only through some effort are they adapted to the geometry of the domain/manifold. Finally, we want to mention the setting of Spectral Graph Theory, where one can naturally do Fourier analysis through the Laplacian of a graph, and where several multi-scale constructions, for use in a variety of applications, are still quite ad hoc. In the paper “Diffusion Wavelets” we propose a construction of wavelets on discrete (or discretized continuous) graphs and spaces, adapted to the “geometry” of a given diffusion operator T, where the attribute “diffusion” is intended in a rather general sense. The motivation for starting with a given diffusion operator is that in many cases one is interested in studying functions on the graph/space, and hence it seems natural to start with a local operator generating local relationships between functions. Powers of the operator propagate these relationships further away until they become global. If the spectrum of T decays, large powers have low rank, and hence are compressible. For example, we can think of the range of a high power of T as being essentially spanned by very smooth functions with small gradient, or even band-limited functions. It is natural to take advantage of this and compress the ranges of (dyadic) powers of the operator, thus obtaining a decreasing chain of subspaces, which can be interpreted as scaling function approximation spaces; the differences between these subspaces can be called wavelet subspaces. The construction of the basis elements is non-trivial: we show one can build orthonormal bases of scaling functions and wavelets, with good localization properties, in about O(n log^2 n) operations, even if the constants are still big and their improvement is of great practical significance and is being investigated. The algorithm proceeds by applying T^(2^j), expressed on the basis of orthonormal scaling functions spanning V_j, then orthogonalizing the resulting set of vectors and discarding the ones not needed to span (numerically) the same subspace, and so on; hence the orthonormalization step encapsulates the downsampling step. Our construction works on a Riemannian manifold, with respect to, for example, the Laplace-Beltrami diffusion on the manifold, or on a weighted graph, with respect to, for example, the natural diffusion induced by the weights on the graph.
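In code, one level of this cascade can be sketched as follows (a bare-bones paraphrase that uses a plain column-pivoted QR in place of the localized orthonormalization of the paper; eps_rank and the number of levels are arbitrary, and T is the diffusion operator from the circle sketch above):

    eps_rank = 1e-6;
    Tj  = T;                       % [T^(2^0)] on the basis Phi_0 of delta functions
    Phi = eye(size(T, 1));         % scaling functions of V_0 in the original coordinates
    for j = 0:6
        [Q, R, ~] = qr(Tj, 0);                          % column-pivoted economy QR
        k   = nnz(abs(diag(R)) > eps_rank*abs(R(1)));   % numerical rank of T^(2^j)
        Qk  = Q(:, 1:k);                                % Phi_{j+1} expressed on Phi_j
        Phi = Phi * Qk;                                 % Phi_{j+1} on the original space
        Tj  = Qk' * (Tj*Tj) * Qk;                       % [T^(2^(j+1))] on Phi_{j+1}
        fprintf('dim V_%d = %d\n', j+1, k);
    end
    % At each level the wavelets span the complement of V_{j+1} in V_j;
    % the orthonormalization step plays the role of downsampling.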

Examples

Homogeneous diffusion on a circle

The first example is a standard diffusion on the circle: our set is the circle sampled at 256 points, an initial orthonormal basis is given by the set of 256 delta-functions at these points, and the diffusion is given by the standard homogeneous diffusion on the circle. In the picture we plot several diffusion scaling functions in various scaling function spaces. The finest scaling functions are the given delta-functions. These diffuse into 'triangle functions' (linear splines), which are orthonormalized into a (non-translation-invariant) basis of 'triangle functions' and linear combinations of 'triangle functions'. These are then diffused again, twice, etc. The coarse scaling function spaces are spanned by the first few top eigenfunctions of the diffusion operator, which are simply trigonometric polynomials with small frequency. The algorithm still tries to build well-localized scaling functions out of these trigonometric polynomials (see e.g. V_9, V_10, V_11) when possible. On the left we plot the compressed matrices representing powers of the diffusion operator; on the right we plot the entries of the same matrices which are above precision. The initial powers get fuller because the spectrum of the diffusion decays slowly; it is enough to consider, instead of the given diffusion, a small power of it to avoid this partial fill-up.

Non-homogeneous diffusion on the circle

We consider a circle as before, but now the diffusion operator is not translation-invariant or homogeneous: the conductivity is non-constant and is depicted in the figure in the top-left position. Think of the circle as made of different materials, most conductive at the top and least conductive (almost insulating) at the bottom. In the top-right picture we represent some scaling functions at level 15; by construction/definition, these scaling functions span the range of T^(2^16-1). Observe that the “scale” of these scaling functions is far from uniform if we measure it in a translation-invariant way (for example with standard wavelets on the circle, or with the diffusion wavelets of the previous example). However, this is exactly the scale at which the corresponding power of T is operating: at the top of the circle the diffusion acted fast, the numerical rank of the operator restricted to that region is very small, and the scaling functions are trivial there; on the bottom part of the circle this power of T is still far from trivial, since there the diffusion is much slower, and the scaling functions exhibit a more complex behavior (they still contain local high-frequency components). In the second row we show how one can compute an eigenfunction of the compressed operator and extend it back to the whole space, with good precision. In the third row we show the entries above precision of a power of the operator (left); on the right we show a “diffusion scaling function” embedding, which shows that points at the bottom of the circle have a large “diffusion distance” (it is hard/takes a long time to diffuse from one to the other), while points at the top have a small diffusion distance. Diffusion distance in the original space is roughly Euclidean distance in this “diffusion scaling function” embedding.

Diffusion on a graph of 3 Gaussian clouds

In this example we consider the graph induced by points randomly distributed according to three Gaussian random variables centered at different points. A graph is associated to these points by putting edges between close-by points, with weights which are an exponentially decaying function of the distances between the points. The picture top-left shows the points, the picture top-center shows the image of the points under the Laplacian eigenfunction map, and the remaining figures show some diffusion scaling functions and wavelets, hinting at how they could be used to localize the analysis of this graph. A small sketch of such a construction follows below.
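A hedged Matlab sketch of this example (the cloud centers, kernel width and truncation radius are invented for illustration):

    % Three Gaussian clouds, a weighted proximity graph, and the
    % Laplacian eigenfunction map of the points.
    rng(0);
    C = [0 0; 4 0; 2 4];                              % cloud centers
    X = kron(C, ones(100, 1)) + 0.5*randn(300, 2);    % 100 points per cloud
    D2 = sum(X.^2, 2) + sum(X.^2, 2)' - 2*(X*X');
    W  = exp(-D2/0.5) .* (D2 < 4);                    % edges only between close-by points
    W  = (W + W')/2;                                  % enforce exact symmetry
    L  = diag(sum(W, 2)) - W;                         % graph Laplacian
    [V, E] = eig(L);
    [~, ix] = sort(diag(E)); V = V(:, ix);            % sort eigenvalues ascending
    labels = kron((1:3)', ones(100, 1));
    scatter(V(:, 2), V(:, 3), 10, labels, 'filled');  % eigenfunction map separates the clouds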

Diffusion on a dumbbell-shaped manifold

We consider a dumbbell-shaped manifold, with diffusion given by (a discrete approximation to) the Laplace-Beltrami operator. The picture shows different scaling functions and wavelets at different scales, on this manifold.

Extension outside the set

We also propose an algorithm for extending scaling functions and wavelets outside the data set. In the figure we show the extension of some scaling functions from the set (left) to many points randomly generated around the set.

Diffusion Wavelet Packets

Diffusion Wavelet Packets can be constructed by further splitting the wavelet subspaces, as in the classical case.

Local Discriminant Bases with Diffusion Wavelet Packets

We consider two classes of functions on the sphere, and the problem of learning a good set of features such that the projection onto them allows for good discrimination between the functions of the two classes. Functions of class A are built as the superposition of three ripply functions, with equi-oriented ripples of the same frequency, centered around three points moving slightly in a noisy way, and two Gaussian bumps acting as decoys, which are non-overlapping with the three ripply functions. Functions of class B are similar, except that one (randomly chosen) ripply function has ripples in a different direction than the other two. Running CART on the original data has an error of 48%, running it on the top 40 LDB features leads to an error of about 12.5%, and running it on the top discriminating eigenfunctions leads to an error of about 18%. As a second example of local discriminant bases, we consider the following two classes of functions. We fix a direction v and slightly perturb it with random Gaussian noise. Around that direction we create a ripply spherical cap, with one or two oscillations depending on the class. We then add 5 non-overlapping ripply functions, acting as decoys, randomly on the sphere. CART run on the original data has an error of about 17.5%, CART on the top 20 LDB features has an error of about 3.5%, and CART on the first 300 eigenfunctions has an error of about 31%.

Overview

Diffusion geometries refer to the large-scale geometry of a manifold or a graph representing a data set, which is determined by long-time heat flows on the manifold/graph/data set. A multiscale analysis, which includes all scales and not just large ones, is possible with diffusion wavelets.

This behavior can be accurately described by the bottom eigenfunctions of the Laplacian operator on the manifold or on the graph (or data set). Moreover, these eigenfunctions can be used to define an embedding (sometimes called an eigenmap) of the manifold or graph into low-dimensional Euclidean space, in such a way that Euclidean distances in the range of the eigenmap correspond to "diffusion distances" on the manifold or graph. Techniques based on eigenvectors of similarity matrices (which are related to the Laplacian) have been used successfully for some time in several problems in data analysis, segmentation, clustering, etc. These connections, as well as connections to differential-geometric structures of manifolds and problems related to the stability, computability, and extendibility of these eigenfunctions, have been explored extensively in Stephane Lafon's thesis. Further material, including talks and papers, is available on Stephane Lafon's web page.

Diffusion Geometry was introduced with R. Coifman and Stephane Lafon (now at Google - check his home page for cool demos introducing the basics of diffusion geometries!). There are a few key ideas behind the introduction of Diffusion Geometry.
  • The first key idea is that for high-dimensional data large distances are often not reliable, as they are severely corrupted by noise, and if the data has low-dimensional geometry, large distances do not respect such geometry. Therefore one starts by connecting each data point only with the closest points - those close enough that the distance may be trusted. This leads to a graph where each point is connected to its nearest points, possibly weighted in such a way that closer points are connected by edges with greater weight.
  • The second key idea is that even if the data lies on a low-dimensional manifold, the geodesic (i.e. shortest-path) distance on the manifold is not necessarily an effective distance, as it may too easily be corrupted by noise. In diffusion geometry the geodesic distance is replaced by diffusion distance, which is in fact a family of distances, parametrized by time. The diffusion distance between two points at time t takes into consideration all paths of length up to t in order to evaluate a distance between the two points, weighting each path by the probability of realizing that path according to the random walk on the graph of nearest points previously constructed.
  • The third idea is that one may create a map from high-dimensional space to low-dimensional space that respects diffusion distance, by using the top eigenvectors of the random walk on the graph (see the sketch after this list). This is essentially kernel PCA, with the kernel being the random walk on the graph - in particular, the kernel is data-adaptive.
  • The fourth idea is that this construction may be made multiscale. With diffusion wavelets, this idea is carried further, to consider multiple scales on the graph in a multi-resolution fashion.
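A minimal Matlab sketch tying the ideas above together (the two-circles data set and all parameter values are invented for illustration; this is a generic diffusion-map computation, not the code distributed below):

    % Nearest-neighbor graph with Gaussian weights, random walk, and
    % embedding by the top nontrivial eigenvectors (a diffusion map).
    rng(0);
    t = 2*pi*rand(600, 1);
    X = [cos(t), sin(t)] .* [ones(300,1); 2*ones(300,1)];  % two concentric circles
    X = X + 0.05*randn(600, 2);
    D2 = sum(X.^2, 2) + sum(X.^2, 2)' - 2*(X*X');
    sig = 0.1;
    W = exp(-D2/sig) .* (D2 < 9*sig);                 % trust only small distances
    d = sum(W, 2);
    S = (W ./ sqrt(d)) ./ sqrt(d)';                   % symmetrized random walk
    S = (S + S')/2;
    [U, E] = eigs(S, 4);                              % top of the spectrum
    [lam, ix] = sort(diag(E), 'descend');
    Psi = U(:, ix) ./ sqrt(d);                        % eigenvectors of the walk
    tdiff = 8;                                        % diffusion time
    Y = Psi(:, 2:3) .* (lam(2:3)'.^tdiff);            % 2-d diffusion map at time tdiff
    scatter(Y(:,1), Y(:,2), 10, [ones(300,1); 2*ones(300,1)], 'filled');
    % Euclidean distance in Y approximates diffusion distance at time tdiff.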
Diffusion Geometry has been extended in many directions. One direction has been the study and model reduction of high-dimensional stochastic systems. For example:
  • Model reduction in molecular dynamics
  • Dimitris Giannakis and his collaborators have extended the construction of diffusion geometry to trajectories of dynamical systems by taking into account velocities, and used this construction to perform dimension reduction of complex high-dimensional systems.
  • the study of paths of activity in gene networks, in DNA sequencing, in hyperspectral imaging, and much more.
    See here for the list of works citing Diffusion Geometry

Papers

Overview

This is an ongoing collaboration with Cecilia Clementi and various people at her lab at Rice. See our recent publications:

See also the snippet at the Institute for Pure and Applied Mathematics, where this research was initiated, here

The main ideas are that data from molecular dynamics simulations, e.g. in the form of the coordinates of the atoms in a molecule as a function of time, lie on or near an intrinsically low-dimensional set in the high-dimensional state space of the molecule, and that geometric properties of such sets provide important information about the dynamics, or about how to build low-dimensional representations of such dynamics. We apply recent work on the estimation of the intrinsic dimension of data sets in high dimensions to such data, validating the hypothesis that the set of configurations of the molecule does indeed lie on an intrinsically low-dimensional set (at least, in the examples considered). We then use this information, together with a notion of local scale (roughly defined as the largest scale at which the data is well-approximated by a low-dimensional linear subspace), to introduce a variation of diffusion maps that leads to a set of nonlinear coordinates in state space, onto which we may project the dynamics and construct a low-dimensional diffusion process that well-approximates the large-time behavior of the molecular dynamics simulation.

See this nugget and the papers above for more information.

Figure captions: The spectrum of the empirical approximations to the Fokker-Planck operator generating the dynamics, for the two systems in the papers above; a spectral gap indicates few slow modes. (3) Multiscale singular values of local covariance matrices in different regions of state space: they identify low-dimensional linear planes approximating the molecular dynamics trajectories, and a local scale for such approximation (top: near a transition state; bottom: near a metastable state). (4) The intrinsic dimension and scale change with the region of state space considered: local scale (top) and corresponding intrinsic dimension (bottom) varying across state space.

Links to pages of interest

CECAM workshops

Contents

I recently started working with Sridhar Mahadevan on Markov Decision Processes. The tools of harmonic analysis on graphs, especially the eigenfunctions of the Laplacian and Diffusion Wavelets, seem to find very natural applications in the study of Markov Decision Processes. First of all, they can efficiently encode functions on general state spaces for these processes, performing a dimensionality-reduction task which is recognized as fundamental for efficient computation with these processes. Secondly, there are many connections between Markov Decision Processes and harmonic analysis of Markov chains, stemming from the connections between the Bellman equation and the Green's function or fundamental matrix of a Markov chain. On the other hand, the nonlinearity of the optimization problem and the stochasticity of most ingredients of a Markov Decision Process offer new challenges. I will add more to this page as we make progress in this program.
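To make the connection to the Green's function concrete, here is the standard direct policy-evaluation computation (a textbook illustration on a random model; this is the baseline the multiscale methods below aim to accelerate, not the multiscale solver itself):

    % Policy evaluation for a fixed policy pi: solve Bellman's equation
    % V = r + gamma*P*V directly via the Green's function (I - gamma*P)^{-1}.
    rng(0);
    nS = 50; gamma = 0.95;
    P = rand(nS); P = P ./ sum(P, 2);   % a random row-stochastic P^pi
    r = randn(nS, 1);                   % expected one-step rewards under pi
    V = (eye(nS) - gamma*P) \ r;        % O(|S|^3) dense solve -- the step a
                                        % multiscale representation can compress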



Publications

The most recent work on multiscale/hierarchical representations for MDP's is this paper: Multiscale Markov Decision Problems: Compression, Solution, and Transfer Learning, J. Bouvrie, M. Maggioni. A short version is available here: Efficient Solution of Markov Decision Problems with Multiscale Representations, J. Bouvrie and M. Maggioni, Proc. 50th Annual Allerton Conference on Communication, Control, and Computing, 2012.

In these works we introduce an automatic multiscale decomposition of MDP's, which leads to a hierarchical set of MDP's ``at different scales'' corresponding to different portions of state space, separated by ``geometric'' and ``task-specific'' bottlenecks. The hierarchy of nested subproblems is such that each subproblem is itself a general type of MDP that may be solved independently of the others (thereby giving trivial parallelization), and the solutions may then be stitched together through a certain type of ``boundary conditions''. These boundary conditions are propagated top to bottom: the coarsest problem(s) are solved first, their solutions are propagated down (essentially, as boundary conditions for the finer-scale problems), and these coarse solutions are ``filled in'' by solving the finer problems with the propagated boundary conditions. Besides giving fast parallelizable algorithms for the solution of large MDP's amenable to this decomposition, these decompositions also allow one to perform transfer learning of sub-problems (possibly at different scales), thereby enhancing the applicability and power of transfer learning.

Here are electronic copies of two Technical Reports written with my collaborator Sridhar Mahadevan at the Computer Science Dept. at University of Mass. at Amherst about the application of the eigenfunctions of the Laplacian and Diffusion Wavelets to the solution of Markov Decision Processes:

Sridhar and I organized a workshop on applications of spectral methods to Markov Decision Processes; check out the page of our ICML '06 Workshop, held during ICML 2006.


Proto-value Functions: A Laplacian Framework for Learning Representation and Control in Markov Decision Processes , with S. Mahadevan. Tech Report Univ. Mass. Amherst, CMPSCI 2006-35, May 2005.
This paper introduces a novel paradigm for solving Markov decision processes (MDPs), based on jointly learning representations and optimal policies. Proto-value functions are geometrically customized task-independent basis functions forming the building blocks of all value functions on a given state space graph or manifold. In this first of two papers, proto-value functions are constructed using the eigenfunctions of the (graph or manifold) Laplacian, which can be viewed as undertaking a Fourier analysis on the state space graph. The companion paper (Maggioni and Mahadevan, 2006) investigates building proto-value functions using a multiresolution manifold analysis framework called diffusion wavelets, which is an extension of classical wavelet representations to graphs and manifolds. Proto-value functions combine insights from spectral graph theory, harmonic analysis, and Riemannian manifolds. A novel variant of approximate policy iteration, called representation policy iteration, is described, which combines learning representations and approximately optimal policies. Two strategies for scaling proto-value functions to continuous or large discrete MDPs are described. For continuous domains, the Nyström extension is used to interpolate Laplacian eigenfunctions to novel states. To handle large structured domains, a hierarchical framework is presented that compactly represents proto-value functions as tensor products of simpler proto-value functions on component subgraphs. A variety of experiments are reported, including perturbation analysis to evaluate parameter sensitivity, and detailed comparisons of proto-value functions with traditional parametric function approximators.


A Multiscale Framework for Markov Decision Processes using Diffusion Wavelets , with S. Mahadevan. Tech Report Univ. Mass. Amherst, CMPSCI 2006-36.
We present a novel hierarchical framework for solving Markov decision processes (MDPs) using a multiscale method called diffusion wavelets. Diffusion wavelet bases significantly differ from the Laplacian eigenfunctions studied in the companion paper (Mahadevan and Maggioni, 2006): the basis functions have compact support, are inherently multiscale both spectrally and spatially, and capture localized geometric features of the state space, and of functions on it, at different granularities in space-frequency. Classes of (value) functions that can be compactly represented in diffusion wavelets include piecewise smooth functions. Diffusion wavelets also provide a novel approach to approximate powers of transition matrices. Policy evaluation is usually the expensive step in policy iteration, requiring O(|S|^3) time to directly solve the Bellman equation (where |S| is the number of states for discrete state spaces or the sample size in continuous spaces). Diffusion wavelets compactly represent powers of transition matrices, yielding a direct policy evaluation method requiring only O(|S|) complexity in many cases, which is remarkable because the Green's function (I - P^\pi)^{-1} is usually a full matrix requiring quadratic space just to store each entry. A range of illustrative examples and experiments, from simple discrete MDPs to classic continuous benchmark tasks like the inverted pendulum and mountain car, are used to evaluate the proposed framework.


Value Function Approximation with Diffusion Wavelets and Laplacian Eigenfunctions , with S. Mahadevan. Tech Report Univ. Mass. Amherst, CMPSCI 05-38, May 2005.
Analysis of functions on manifolds and graphs is essential in many tasks, such as learning, classification, clustering, and reinforcement learning. The construction of efficient decompositions of functions has until now been quite problematic, and restricted to a few choices, such as the eigenfunctions of the Laplacian on a manifold or graph, which have found interesting applications. In this paper we propose a novel paradigm for analysis on manifolds and graphs, based on the recently constructed diffusion wavelets. They allow a coherent and effective multiscale analysis of the space and of functions on the space, and are a promising new tool in classification and learning tasks, in reinforcement learning, among others. In this paper we overview the main motivations behind their introduction and their properties, and sketch a series of applications, among which multiscale analysis of document corpora, structural nonlinear denoising of data sets, and the tasks of value function approximation and policy evaluation in reinforcement learning, analyzed in two companion papers. The final form of this paper has appeared in Proc. NIPS 2005, with Sridhar Mahadevan; Tech Report Univ. Mass. Amherst, CMPSCI 05-39, May 2005.


Fast Direct Policy Evaluation using Multiscale Analysis of Markov Diffusion Processes , with Sridhar Mahadevan, accepted to ICML 2006.
Policy evaluation is a critical step in the approximate solution of large Markov decision processes (MDPs), typically requiring $O(|S|^3)$ to directly solve the Bellman system of $|S|$ linear equations (where $|S|$ is the state space size). In this paper we apply a recently introduced multiscale framework for analysis on graphs to design a faster algorithm for policy evaluation. For a fixed policy $\pi$, this framework efficiently constructs a multiscale decomposition of the random walk $P^\pi$ associated with the policy $\pi$. This enables efficiently computing medium and long term state distributions, approximation of value functions, and the {\it direct} computation of the potential operator $(I-\gamma P^\pi)^{-1}$ needed to solve Bellman's equation. We show that even a preliminary non-optimized version of the solver competes with highly optimized iterative techniques, requiring in many cases a complexity of $O(|S| \log^2 |S|)$.

Learning Representation and Control in Continuous Markov Decision Processes , with Sridhar Mahadevan, Kimberly Ferguson, Sarah Osentoski, accepted to AAAI 2006. This paper presents a novel framework for simultaneously learning representation and control in continuous Markov decision processes. Our approach builds on the framework of proto-value functions, in which the underlying representation or basis functions are automatically derived from a spectral analysis of the state space manifold. The proto-value functions correspond to the eigenfunctions of the graph Laplacian. We describe an approach to extend the eigenfunctions to novel states using the Nyström extension. A least-squares policy iteration method is used to learn the control policy, where the underlying subspace for approximating the value function is spanned by the learned proto-value functions. A detailed set of experiments is presented using classic benchmark tasks, including the inverted pendulum and the mountain car, showing the sensitivity in performance to various parameters, and including comparisons with a parametric radial basis function method.





Links

Here are some links to useful pages:



Links

2006 NSF Workshop and Outreach Tutorials on Approximate Dynamic Programming. Check out tutorials and workshop presentations.

People

Links to some people in the field (this is a very partial list, under continuous construction!):

Overview

This is an ongoing collaboration with Rachael Brady and Eric Monson, in the Scientific Visualization group at Duke.

Here is a list of posters and papers:

See Eric's page on the FODAVA activities, which contains more materials and software.

Overview

With light sources of increasingly broad spectral range, spectral analysis of tissue sections has evolved from two-wavelength image subtraction techniques to hyperspectral mapping. A variety of proprietary spectral splitting devices (including prisms and mirrors, interferometers, variable interference filter-based monochromators, and tuned liquid crystals), mounted on microscopes in combination with CCD cameras and computers, have been used to discriminate among cell types and endogenous and exogenous pigments.

Goals. We use a unique prototype tuned light source, a digital mirror array device (Plain Sight Systems) based on micro-optoelectromechanical systems, in combination with analytic algorithms developed in the Yale Program in Applied Mathematics, to evaluate the diagnostic efficiency of hyperspectral microscopic analysis of normal and neoplastic colon biopsies prepared as microarray tissue sections. We compare the results to our previous spectral analysis of colon tissues and to other spectral studies of tissues and cells.

Experimental Details
Platform: The prototype tuned light digital mirror array device transilluminates H & E stained micro-array tissue sections with any combination of light frequencies, range 440 nm – 700 nm, through a Nikon Biophot microscope. Hyper-spectral tissue images, multiplexed with a CCD camera (Sensovation), are captured & analyzed mathematically with a PC.

Image Source: 147 (76 normal & 71 malignant) hyperspectral gray-scale 400X images are derived from 68 normal and 62 malignant colon biopsies, selected from approximately 200 normal and approximately 600 malignant H & E stained biopsies arrayed respectively on two different slides.

Cube: Each hyperspectral image is a 3-D data cube (Figure 3) with spatial coordinates x (491 pixels) and y (653 pixels) and spectral coordinate z (128 bands), for a total of roughly 41 million transmitted-light measurements.
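For concreteness, unfolding such a cube into a matrix with one spectrum per spatial pixel is a one-liner in Matlab (variable names and the normalization choice are illustrative, not the actual pipeline):

    cube = rand(491, 653, 128);            % stand-in for one captured data cube
    spectra = reshape(cube, [], 128);      % one transmitted spectrum per spatial pixel
    spectra = spectra ./ sum(spectra, 2);  % a simple per-pixel normalization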


 

Data

Figure 2: Microarray biopsies

Figure 3: Hyperspectral data cube (image from DataFusion Corp)

 

Average nucleus spectrum with standard deviation bars.

A spectral slice of a normal gland

A spectral slice of a cancerous gland

TISSUE CLASSIFICATION

Tissue classification algorithm on a sample

Tissue classification algorithm on a sample



ALGORITHM FOR NORMAL/ABNORMAL DISCRIMINATION

Normal:
GREEN – true negative (normal classified as normal);
BLUE – indeterminate
RED – false positive (normal classified as abnormal)

Abnormal:
GREEN – false negative (abnormal classified as normal)
BLUE – indeterminate
RED – true positive (abnormal classified as abnormal)



Table 1: Performance of the classifier on nuclei patches, cross-validated.

  Patches/Nuclei (8688)            True Positive (4860)   True Negative (3828)
  Predicted Positive (malignant)   94.0% (4568)           7.3% (280)
  Predicted Negative (normal)      6.0% (292)             92.7% (3548)

 

CLASSIFICATION OF A WHOLE SLIDE

The classification of a whole slide is obtained by selecting 40 random nuclei patches from the slide and averaging the corresponding classifications. The classification of the slides makes no mistakes, since the few classification errors on individual nuclei are averaged out over the whole slide.
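A hedged sketch of this slide-level rule (the function and variable names are invented for illustration):

    % Classify a whole slide by averaging the patch classifier's 0/1 labels
    % over 40 randomly selected nuclei patches.
    function label = classify_slide(patch_labels)
        % patch_labels: 0/1 labels assigned by the patch classifier to all
        % nuclei patches found on the slide
        k = min(40, numel(patch_labels));
        sel = randperm(numel(patch_labels), k);   % 40 random nuclei patches
        label = mean(patch_labels(sel)) > 0.5;    % averaged vote: 1 = malignant
    end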


A SPATIAL-SPECTRAL CLASSIFIER

Papers

Introduction to Statistical Learning, Data Analysis and Signal Processing - Spring 2017

Course number: AS.110.446, EN.550.416.
A synopsis is available here. A syllabus is available here.
Office hours: Krieger 405, Wednesday 4:15-5:15pm (instructor); Whitehead 212, Thursdays 4:30-6:30pm (Zachary Lubberts, TA).
Midterm exam: Wednesday 3/15. Extra office hours on Tuesday 2-4pm (Shangsi Wang) and 4pm-5pm (Zachary Lubberts).
This wiki contains materials for the lectures, links to references, and data sets.

If you are unable to connect to the wiki, it is most likely because of one of the recurrent JHU network outages affecting my office, which make my servers unavailable. I will transition the materials to my home server when I have time. You may try this alternative link (JHU local network only).

Synopsis of course content. Introduction to high dimensional data sets: key problems in statistical and machine learning. Geometric aspects. Principal component analysis, linear dimension reduction, random projections. Concentration phenomena: examples and basic inequalities. Metric spaces and embeddings thereof. Kernel methods. Nonlinear dimension reduction, manifold models. Regression. Vector spaces of functions, linear operators, projections. Orthonormal bases; Fourier and wavelet bases, and their use in signal processing and time series analysis. Basic approximation theory. Linear models, least squares. Bias and variance tradeoffs, regularization. Sparsity and compressed sensing. Multiscale methods. Graphs and networks. Random walks on graphs, diffusions, page rank. Block models. Spectral clustering, classification, semi-supervised learning. Algorithmic and computational aspects of the above will be consistently in focus, as will be computational experiments on synthetic and real data.

Prerequisites. Linear algebra will be used throughout the course, as will multivariable calculus and basic probability (discrete random variables). Basic experience in programming in C or MATLAB or R or Octave. Recommended: More than basic programming experience in Matlab or R; some more advanced probability (e.g. continuous random variables), some signal processing (e.g. Fourier transform, discrete and continuous).

Grading. Grade to be based on weekly assignments, midterm and final exams, and a final team project. The final team project includes a report and the presentation of a poster on the topic of the project (typically involving studying a small number of papers and summarizing them and/or working on a data set). Weekly problem sets will focus on computational projects and theory.

Students from all areas of science, mathematics and applied mathematics, engineering, computer science, statistics, quantitative biology, economics that need advanced level skills in solving problems related to the analysis of data, signal processing, or statistical modeling are encouraged to enroll.

The Instructor is available during his office hours, in Krieger 405, on Tuesdays from 2pm to 3pm. Our Teaching Assistant, Mr. Zachary Lubberts, is available during his office hours, in Whitehead 212, on Thursdays from 4:30-6:30pm. The Math help room, in Krieger 213 I believe, is staffed daily, roughly from 9am-9pm. I found a schedule for last semester at http://www.math.jhu.edu/helproomschedule.pdf; you may ask questions there about any math topic related to the course.

Homework sets:

Math 431 - Advanced Calculus, I - Spring 2016

Office hours: TBA, Gross Hall

This course will develop a rigorous theory of elementary mathematical analysis including differentiation, integration, and convergence of sequences and series. Students will learn how to write mathematical proofs, how to construct counterexamples, and how to think clearly and logically. These topics are part of the foundation of all of mathematical analysis and applied mathematics, geometry, ordinary and partial differential equations, probability, and stochastic analysis.

Textbook: Fundamental Ideas of Analysis, by Michael Reed. The course will cover most, but not all, of the material in Chapters 1-6.

Full Synopsis

Homework sets:

Math 561 - Scientific Computing, I - Fall 2015

Office hours: Mondays at 2:45 in Gross Hall 318

The first part of the course will cover basic numerical linear algebra, in particular matrix factorizations, the solution of linear systems, and eigenproblems. The suggested textbook is Trefethen and Bau's Numerical Linear Algebra. The textbook by Heath, Scientific Computing, may be a useful reference for exercises, of which it contains many. The second part of the course will cover basic nonlinear optimization (gradient descent, Newton's method, stochastic gradient descent) and basic notions in linear programming. The third part of the course will cover the basics of Monte Carlo algorithms. A short syllabus is available here, and an extended one is here.

One midterm exam. Final exam will include a take-home portion (algorithms, coding) and an in-class portion.

Homework: weekly homework.

Homework:

The code we discussed in our last class on 11/23, which includes solving a discretized linear PDE in various ways, as well as computing eigenvalues/eigenvectors, is available here.

Useful links:

Sample code I used in the first class to produce some examples

John Trangenstein's home page contains a link to his online book on Scientific Computing, as well as several useful links to programming guides for Fortran, C, C++ and Lapack on his page for Math 225. William Allard's home page also contains useful material, such as notes and links to online guides and materials.

Fortran tutorial: here and here.

Matlab tutorial: from Mathworks here, from the University of Florida here. Many more are available online, just use your favourite search engine to look for "matlab tutorial".

Cleve Moler book (selected chapters): Numerical computing with MATLAB and Experiments with MATLAB.

Math 690 - Topics in statistical learning theory - Spring 2015

Class: MW 10:05-11:20, Location: Physics 119

Math 561 - Scientific Computing, I - Fall 2014

Class: MW 1:25-2:40, Location: Gross 304B

Office hours: Tue 12:00-1:00pm, Location: Gross 309

Here is the synopsis and its extended version in the syllabus.

Review on 12/10 at noon in 304B

Take-home exam: I will e-mail you a link to the exam on the night of Friday, Dec. 5th. You pick a 6-hour period between then and the final in-class exam; at the beginning of the period of your choice you download the exam, and you e-mail me an electronic copy of your solutions by the end of the 6-hour period. A (matching) paper copy is due at the end of the in-class exam. You can use the textbook (no other books), your notes from class, and your homework. You cannot copy code directly or re-use code from your homework. Needless to say, do not discuss the problems with anybody else.

The first part of the course will cover basic numerical linear algebra, in particular matrix factorizations, solution of linear systems and eigenproblems.

The suggested textbook is Trefethen and Bau's Numerical Linear Algebra. The textbook by Heath, Scientific Computing, may be a useful references for exercises, of which it contains many.

Useful links:

John Trangenstein's home page contains a link to his online book on Scientific Computing, as well as several useful links to programming guides for Fortran, C, C++ and Lapack on his page for Math 225. William Allard's home page also contains useful material, such as notes and links to online guides and materials.

Fortran tutorial: here and here.

Matlab tutorial: from Mathworks here, from the University of Florida here. Many more are available online, just use your favourite search engine to look for "matlab tutorial".

Cleve Moler book (selected chapters): Numerical computing with MATLAB and Experiments with MATLAB.

Math 431 - Advanced Calculus, I - Spring 2014

Office hours: Mondays, 1:30-2:30pm, Gross Hall, Office 318

This course will develop a rigorous theory of elementary mathematical analysis including differentiation, integration, and convergence of sequences and series. Students will learn how to write mathematical proofs, how to construct counterexamples, and how to think clearly and logically. These topics are part of the foundation of all of mathematical analysis and applied mathematics, geometry, ordinary and partial differential equations, probability, and stochastic analysis.

Textbook: Fundamental Ideas of Analysis, by Michael Reed. The course will cover most, but not all, of the material in Chapters 1-6.

Full Synopsis

Midterms: February 19th, and March 26th.

Homework:

Math 790 - Random Matrices, Concentration Inequalities and Applications to Signal Processing and Machine Learning - Spring 2013

We discuss several related topics and techniques at the intersection between probability, approximation theory, high-dimensional geometry, and machine learning and statistics. Synopsis. Wiki page

Math 224 - Scientific Computing - Fall 2011

Office hours: Tue 4-5pm, my office is Room 293 in the Physics Bldg.

Here is the synopsis.

The first part of the course will cover basic numerical linear algebra, in particular matrix factorizations, solution of linear systems and eigenproblems, nonlinear equations in 1 dimensions. If time permits, we shall discuss recent randomized algorithms in numerical linear algebra.

Useful links:

John Trangenstein's home page contains a link to his online book on Scientific Computing, as well as several useful links to programming guides for Fortran, C, C++ and Lapack on his page for Math 225. William Allard's home page also contains useful material, such as notes and links to online guides and materials.

Fortran tutorial: here and here.

Matlab tutorial: from Mathworks here, from the University of Florida here. Many more are available online, just use your favourite search engine to look for "matlab tutorial".

Cleve Moler book (selected chapters): Numerical computing with MATLAB and Experiments with MATLAB.

Math 288 - Topics in Probability: Geometry, Functions and Learning in High Dimensions - Fall 2011

Here is a flier for the course.

We discuss several related topics and techniques at the intersection between probability, approximation theory, high-dimensional geometry, and machine learning and statistics. We build on basic tools in large deviation theory and concentration of measure, and move to problems in non-asymptotic random matrix theory (RMT), such as estimating the spectral properties of certain classes of random matrices. We then use these tools to study metric properties of certain maps between linear spaces that are near-isometries, such as random projections. We then move to the setting of general metric spaces, introduce multiscale approximation of metric spaces a la Bourgain, discuss tree approximations, and hint at the algorithmic applications of these ideas. Finally we move to the realm of function approximation/estimation/machine learning for functions defined on high-dimensional spaces. We discuss Reproducing Kernel Hilbert Spaces and learning with RKHSs, and we also discuss multiscale techniques for function approximation in high dimensions. We also discuss geometric methods, both graph-based (Laplacians, manifold learning) and multiscale-based. Finally, we discuss recent fast randomized algorithms for certain numerical linear algebra computations, which use the non-asymptotic RMT results discussed above.

Requirements: solid linear algebra and basic probability. Of help, but to be introduced in the course: metric spaces, function spaces, matrix factorizations.

A course wiki contains links to lecture notes, papers and other materials. May be edited by students in the class.

Math 139 - Real Analysis - Fall 2010

Office hours: Mon, 1:30pm-3:30pm, or by appointment

Textbook: Fundamental Ideas of Analysis, by Michael Reed. The course will cover most, but not all, of the material in Chapters 1-6.

There will be a midterm exam, a final exam and weekly homework.

Evaluation: There will also be at least one lengthy assignment which challenges you to write carefully constructed proofs. Your final letter grade will be based on these components, weighted as follows: long assignment(s) 10-15%, regular homework 20-25%, midterm exam 25%, final exam 40%. Homework is due at the beginning of class, stapled, written legibly, on one side of each page only, and must contain the reaffirmation of the Duke community standard; otherwise, it will be returned ungraded. The logic of a proof must be completely clear and all steps justified. The clarity and completeness of your arguments will count as much as their correctness. Some problems from the homework will reappear on exams. I will go over in detail the solution to any homework problem during office hours. You may use a computational aid for the homework, but I do not recommend it. Calculators and computers will not be allowed on the quizzes and exams. The lowest homework score will be dropped. No late homework will be accepted; Duke policies apply to cases of incapacitating short-term illness or officially recognized religious holidays.

You may, and are encouraged to, discuss issues raised by the class or the homework problems with your fellow students and both offer and receive advice. However all submitted homework must be written up individually without consulting anyone else's written solution.

Math 338 - Topics in Graph Theory, Random Matrices, and Applications - Spring 2010

A flier for the course, with summary of some of the topics.

Math 348 - Applied Harmonic and Multiscale Analysis - Fall 2009

Office hours: by appointment.

Students have access to a wiki and blog with materials for the course and to discuss topics and problems

The overarching theme is applied multiscale harmonic analysis, in various of its forms. Very rough outline (probably over-ambitious):
  • Basic Fourier analysis; Littlewood-Paley theory (a.k.a. how to do multiscale analysis with Fourier); square functions & Khintchine inequality; applications to signal processing (audio/video).
  • Classical multiresolution analysis through wavelets; applications to Calderón-Zygmund integral operators and associated numerical algorithms; applications to signal processing (audio/video).
  • Multiscale analysis of random walks on graphs; applications to analysis of high-dimensional data sets, regression and inference.
  • A sketch of some techniques in multiscale geometric measure theory, in particular the geometric version of square functions; concentration phenomena for measures in high-dimensional spaces; results in random matrix theory. Applications to analysis of high-dimensional data sets and inference, as well as numerical linear algebra.

Math 225 - Scientific Computing II - Spring 2009

Office hours: by appointment.

Here is the synopsis.

Cleve Moler book (selected chapters): Numerical computing with MATLAB and Experiments with MATLAB.

Math 133 - Introduction to Partial Differential Equations - Spring 2009

Office hours: Tuesday 3-4:30pm, or by appointment.

Here is the synopsis.

The first test will be on February 17th (usual class time). (Solution)

A review session will be held on April 23rd, 4:30pm-5:30pm, in the usual classroom. I will answer your questions.

Material will be added as the course proceeds. Matlab tutorials are available from Mathworks, UFL, cyclismo.org, Matlab resources and many others.

Math 378 - Minicourse - Introduction to Spectral Graph Theory and Applications - Spring 2008

We will discuss the basics of spectral graph theory, which studies random walks on graphs, and related objects such as the Laplacian and its eigenfunctions, on a weighted graph. This can be thought of as a discrete analogue of spectral geometry, albeit the geometry of graphs and their discrete nature give rise to issues not generally considered in the continuous, smooth case of Riemannian manifolds. We will present some classical connections between properties of random walks and the geometry of the graph. We will then discuss disparate applications: the solution of sparse linear systems by multiscale methods based on random walks; the analysis of large data sets (images, web pages, etc.), in particular how to find systems of coordinates on them, performing dimensionality reduction, and performing multiscale analysis on them; and tasks in learning, such as spectral clustering, classification and regression on data sets.

Materials:

  • Code for the demo involving drawing a set of points, constructing an associated proximity graph, and computing and displaying eigenvalues/eigenfunctions of the Laplacian. You need to first download and install the general code for Diffusion Geometry (last updated on 1/29/08), and then download and install this code for running the demo I ran in class, with some images already prepared. After installing the Diffusion Geometry package, please run DGPaths in order to set the Matlab paths correctly. The script for the demo is called GraphEigenFcnsEx_01.m, and it is fairly extensively commented. I will be happy to add your own examples here! The code works best with the Approximate Nearest Neighbor Searching Library by D. Mount and S. Arya. To install this code, simply untar it in a directory and run make. This should produce the file libANN.a in the lib subdirectory. This file is already included in the Diffusion Geometry package, in the MEX directory, compiled on a Unix machine at Duke Math. Copy this library libANN.a into the MEX directory under the directory where the Diffusion Geometry package is installed, and from the Matlab prompt run "mex ANNsearch libANN.a" and "mex ANNrsearch libANN.a". This will yield two .mexglx files, which are what Matlab will call. These two files are already included in the Diffusion Geometry package, after compilation on a Unix machine at Duke Math.

References:

  • R. Diestel's book on Graph Theory is an excellent general reference. Available online here.
  • D. Spielman's notes for his course on Spectral Graph Theory at Yale; several papers on specific applications, depending on the attendees' interests.
  • D. Spielman's notes for his course on Graphs and Networks at Yale. Some overlap with the above, but also other references and materials.
  • F. Chung's book "Spectral Graph Theory". She also wrote a book on "Complex Graphs and Networks", mostly on random graphs and their degree distribution properties, and also some spectral results for them. Visit her homepage for lots of interesting material on graphs, spectral graph theory and its applications. In particular see the gallery of graphs.
  • S. Lafon's web page has some cool tutorials and interactive demos on diffusion geometry.

Math 224 - Scientific Computing - Fall 2007

Office hours: Wed 4:30-5:30, Thu 1:10-2:10, or by appointment.

Here is the synopsis.

The first part of the course will cover basic numerical linear algebra, in particular matrix factorizations, solution of linear systems and eigenproblems. The second part of the course will cover nonlinear equations, numerical integration and differentiation, basic techniques for ODEs, and the Fast Fourier Transform.

Useful links:

John Trangenstein's home page contains a link to his online book on Scientific Computing, as well as several useful links to programming guides for Fortran, C, C++ and Lapack on his page for Math 225. William Allard's home page also contains useful material, such as notes and links to online guides and materials.

Fortran tutorial: here and here.

Matlab tutorial: from Mathworks here, from the University of Florida here. Many more are available online, just use your favourite search engine to look for "matlab tutorial".

Homework sets:

1 (solution), 2 (solution), 3 (solution), 4 (solution), 5 (solution), 6 (solution), 7, 8.

Partial solution to test 1.

Fall 2007 semester schedule

Math 348 - Harmonic Analysis and Applications - Curr Res in Analysis - Spring 2007

Please find the synopsis here.

I plan to develop lecture notes as the course proceeds. Last update: 1/10/07. The notes are still in a very preliminary state: they should be downloaded and used by students of the course only, and should not be circulated or replicated except for purposes related to the course. When a more stable version becomes available, some of these restrictions will be removed. This link will be updated regularly. Right now the notes are in an extremely preliminary state, and at times they may not even be accessible through the link provided.

A list of topics for presentation suggested for the course (by instructor or students), and the students currently working on them is available here.

Presentations by students:

Diffusion Geometry Code

See the paper Geometric diffusions as a tool for harmonic analysis and structure definition of data. part i: Diffusion maps , RR Coifman, S Lafon, A Lee, M Maggioni, B Nadler, FJ Warner, and SW Zucker. Proc. of Nat. Acad. Sci., 102:7426--7431, May 2005.

Matlab code for Diffusion Geometry.

Latest version of the code: Diffusion geometry [Updated on 7/9/16]. This version now includes covertree for fast nearest neighbor searches. It may not (yet) be compatible with all Linux versions.

Previous version of the code: Diffusion geometry [Updated on 11/12/14].

Instructions:

Unzip in a directory, preserving the subdirectory structure. Open Matlab, go to the installation directory and run Startup_DiffusionGeometry.

RunExamples contains examples for running the GraphDiffusion code, for constructing nearest neighbor graphs and computing eigenvectors of the Laplacian on such graphs.

The main script for constructing graphs and computing Laplacians and their eigenfunctions is called GraphDiffusion. Type help GraphDiffusion at the Matlab prompt to see the options, and run the example there (diffusion on a circle) to test installation.

Important note regarding examples and data sets:

Some of the examples rely on data sets, publicly available on the web, in the appropriate format: you may find them, and their source, below.

Important note regarding nearest neighbor:

The package supports either the nn_search and range_search functions of the TSToolbox or the ANNsearch functions of the Approximate Nearest Neighbor Searching Library by D. Mount and S. Arya. MEX files for some platforms are already included in the Diffusion Geometry package, in the 'NearestNeighbors' directory. If MEX files for your machine are not included, you should compile those files and make them available to Matlab by modifying its search paths as needed.

Please let me know if you encounter problems with the installation, or report successes under different systems. Thanks.

Diffusion Wavelets Code

See the paper Diffusion wavelets, RR Coifman and M Maggioni. Appl. Comp. Harm. Anal., 21(1):53--94, July 2006.

Matlab code for Diffusion Wavelets. A new, much faster version is in the works.

This was originally a joint effort with James C. Bremer Jr. and Arthur D. Szlam.

Last version of the code: Diffusion wavelets [Updated on 7/13/11].

Instructions:

Unzip in a directory, preserving the subdirectory structure. Open Matlab, go to the installation directory and run Startup_DiffusionWavelets.

RunExamples contains examples for running the DWPTree code that constructs diffusion wavelet trees on graphs.

Important note regarding examples and data sets:

The code, especially the examples, rely on the Diffusion Geometry package above and on the data sets below. All notes/comments that apply to the Diffusion Geometry package apply here as well.

Multiscale SVD Code

See the paper Multiscale Geometric Methods for Data Sets I: Multiscale SVD, Noise and Curvature, A. V. Little, M. Maggioni, L. Rosasco. This paper summarizes the work in A.V. Little's thesis (May 2011) on multiscale singular values for noisy point clouds. It extends the analysis of the constructions and results in our previous works Multiscale Estimation of Intrinsic Dimensionality of Data Sets (A.V. Little, Y.-M. Jung and M. Maggioni, Proc. AAAI, 2009) and Estimation of intrinsic dimensionality of samples from noisy low-dimensional manifolds in high dimensions with multiscale SVD (A.V. Little, J. Lee, Y.-M. Jung and M. Maggioni, Proc. SSP, 2009).

Latest version of the code: Multiscale SVD Code [Updated on 11/29/12].

Instructions:

Unzip in a directory, preserving the subdirectory structure. Open Matlab, go to the installation directory and run Startup_MSVD.

RunExamples contains examples for running the MSVD code, including several examples in the paper.

Important note: The code requires the DiffusionGeometry package to be properly installed, as well as the Data Sets (for running certain examples).

Geometric Multi-Resolution Analysis (GMRA) Code

See the paper Geometric Multi-Resolution Analysis, W.K. Allard, G. Chen and M. Maggioni (preprint here), as well as the related ``compressive sampling'' for GMRA paper Approximation of Points on Low-Dimensional Manifolds Via Random Linear Projections, M. Iwen, M. Maggioni.

Matlab code for GMRA and Geometric Wavelet Analysis and Transforms.

Last version of the code: Geometric Multi-Resolution Analysis [Updated on 2/24/17].

Instructions:

Unzip in a directory, preserving the subdirectory structure. Open Matlab, go to the installation directory and run Startup_GMRA. This will add the required paths to Matlab. RunExamples contains examples for running the GMRA code.

Important note: The code includes the Diffusion Geometry package, on which it depends. If you have Diffusion Geometry already installed, please delete the Diffusion Geometry subdirectory that gets installed with this package. The code depends on the Data Sets (for running certain examples). The code requires SuiteSparse and METIS to be installed; it comes with precompiled versions for OS X and Linux 64-bit, in which case the installation of these packages is not required.

Compressed Sensing Inversion for GMRA Code

See the paper Approximation of points on low-dimensional manifolds via random linear projections, M. A. Iwen and M Maggioni. Preprint here.

Matlab code for Compressive Sensing inversion for GMRA.

Last version of the code: Compressive Sensing Inversion for GMRA [Updated on 6/19/13].

Instructions:

Note: The code requires the Diffusion toolbox and all the packages required for it (in particular, Diffusion Geometry; see the GMRA code section), and the SpaRSA toolbox for the performance comparisons.

Some demo code I use in a class on Spectral Graph Theory

This code allows one to take a black and white picture with points, construct an associated proximity graph, and then compute and display eigenvalues/eigenfunctions of the Laplacian on that graph. You need to first download and install the general code for Diffusion Geometry above, and then download and install this code for running the demo I ran in class, with some images already prepared.

The script for the demo is called GraphEigenFcnsEx_01.m, and it is fairly extensively commented. I will be happy to add your own examples here!

Some data sets

Data Sets: Example Data Sets [Updated on 6/7/11].

I include here data sets from a variety of sources, converted to Matlab format and typically pre-processed, for use with the Diffusion Geometry code. They include variations/subsets of the MNIST database, the CBCL Face Data, Science News articles (prepared by J. Solka), and the Yale Face B database. Please refer to the original data on those sites if you plan to use this data, as the parts included here are only for the purpose of running examples with the packages above.

They may be installed in any directory; that directory should then be added to the Matlab path, so that the packages above may find them (this works for the most recent versions of Matlab).

The material presented at this web site is partially based upon work supported by the Alfred P. Sloan Foundation, the Air Force Office of Sponsored Research, the National Science Foundation, and the Office of Naval Research. Any opinions, findings and conclusions or recomendations expressed in this material are those of the author and do not necessarily reflect the views of the granting agencies.