2270 Publications

Foundations of visual form selectivity in macaque areas V1 and V2

T. D. Oleskiw, Justin D. Lieber, E. P. Simoncelli, J. A. Movshon

Neurons early in the primate visual cortical pathway generate responses by combining signals from other neurons: some from downstream areas, some from within the same area, and others from areas upstream. Here we develop a model that selectively combines afferents derived from a population model of V1 cells. We use this model to account for the responses of V1 and V2 neurons, recorded in awake fixating macaque monkeys, to stimuli composed of a sparse collection of locally oriented features ("droplets") designed to drive subsets of V1 neurons. The first stage computes the rectified responses of a fixed population of oriented filters at different scales that cover the visual field. The second stage computes a weighted combination of these first-stage responses, followed by a final nonlinearity, with parameters optimized to fit data from physiological recordings and constrained to encourage sparsity and locality. The fitted model accounts for the responses of both V1 and V2 neurons, capturing an average of 43% of the explainable variance for V1 and 38% for V2. The models fitted to droplet recordings predict responses to classical stimuli, such as gratings of different orientations and spatial frequencies, as well as to textures of different spectral content, which are known to be especially effective in driving V2. The models are less effective, however, at capturing the selectivity of responses to textures that include naturalistic image statistics. The pattern of afferents, defined by their weights over the four dimensions of spatial position, orientation, and spatial frequency, provides a common and interpretable characterization of the origin of many neuronal response properties in the early visual cortex.
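
As a concrete illustration of the two-stage architecture described above, here is a minimal sketch in Python. All filter parameters, helper names (`gabor`, `stage1`, `stage2`), and the power-law output exponent are illustrative assumptions, not the authors' implementation; in the paper, the second-stage weights are fit to physiological recordings under sparsity and locality constraints, and the filter bank tiles spatial position as well as orientation and scale.

```python
import numpy as np

def gabor(size, theta, freq):
    """One oriented filter (a Gabor) at orientation theta and spatial frequency freq."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * (size / 6) ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def stage1(image, thetas, freqs):
    """Stage 1: rectified responses of a fixed population of oriented filters."""
    return np.array([max(float(np.sum(gabor(image.shape[0], th, f) * image)), 0.0)
                     for th in thetas for f in freqs])

def stage2(responses, weights, beta=1.5):
    """Stage 2: weighted combination of first-stage responses, followed by a
    power-law output nonlinearity (the exponent is an illustrative choice)."""
    drive = float(weights @ responses)
    return max(drive, 0.0) ** beta

# Example: respond to a random image with a small filter bank.
img = np.random.randn(32, 32)
resp1 = stage1(img, thetas=np.linspace(0, np.pi, 4, endpoint=False), freqs=[0.05, 0.1])
print(stage2(resp1, weights=np.random.randn(len(resp1))))
```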

Discrete Lehmann representation of three-point functions

Dominik Kiese, Hugo U. R. Strand, Kun Chen, Nils Wentzell, Olivier Parcollet, J. Kaye

We present a generalization of the discrete Lehmann representation (DLR) to three-point correlation and vertex functions in imaginary time and Matsubara frequency. The representation takes the form of a linear combination of judiciously chosen exponentials in imaginary time, and products of simple poles in Matsubara frequency, which are universal for a given temperature and energy cutoff. We present a systematic algorithm to generate compact sampling grids, from which the coefficients of such an expansion can be obtained by solving a linear system. We show that the explicit form of the representation can be used to evaluate diagrammatic expressions involving infinite Matsubara sums, such as polarization functions or self-energies, with controllable, high-order accuracy. This collection of techniques establishes a framework through which methods involving three-point objects can be implemented robustly, with a substantially reduced computational cost and memory footprint.
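
The fitting step described above, recovering expansion coefficients from values on a compact sampling grid by solving a linear system, can be sketched in a simplified one-variable (two-point) analogue. The equispaced poles and sampling nodes below are stand-ins chosen for illustration; in the actual DLR they are selected systematically for a given temperature and energy cutoff.

```python
import numpy as np

beta, wmax = 10.0, 5.0                   # inverse temperature, energy cutoff
omegas = np.linspace(-wmax, wmax, 24)    # stand-in for the selected poles
taus = np.linspace(0.0, beta, 48)        # stand-in for the sampling nodes

def kernel(tau, omega):
    """Fermionic imaginary-time kernel K(tau, w) = e^{-tau w} / (1 + e^{-beta w})."""
    return np.exp(-np.multiply.outer(tau, omega)) / (1.0 + np.exp(-beta * omega))

K = kernel(taus, omegas)                 # design matrix on the sampling grid
g_samples = kernel(taus, 1.7)            # a test Green's function with one pole
coeffs, *_ = np.linalg.lstsq(K, g_samples, rcond=None)

# Once the coefficients are known, the expansion is evaluable anywhere:
tau_new = 3.3
g_interp = kernel(tau_new, omegas) @ coeffs
print(abs(g_interp - kernel(tau_new, 1.7)))   # small if the poles resolve the function
```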

Contrastive-equivariant self-supervised learning improves alignment with primate visual area IT

Models trained with self-supervised learning (SSL) objectives have recently matched or surpassed models trained with traditional supervised object recognition in their ability to predict neural responses of object-selective neurons in the primate visual system. A self-supervised learning objective is arguably a more biologically plausible organizing principle, as the optimization does not require a large number of labeled examples. However, typical self-supervised objectives may result in network representations that are overly invariant to changes in the input. Here, we show that a representation with structured variability to input transformations is better aligned with known features of visual perception and neural computation. We introduce a novel framework for converting standard invariant SSL losses into “contrastive-equivariant” versions that encourage preservation of input transformations without supervised access to the transformation parameters. We demonstrate that our proposed method systematically increases the ability of models to predict responses in macaque inferior temporal cortex. Our results highlight the promise of incorporating known features of neural computation into task optimization for building better models of visual cortex.
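
One way to make the idea concrete (a hedged sketch, not the authors' exact loss): contrast the displacements that a shared transformation induces in embedding space, so transformations are preserved in a structured way without access to their parameters. The function names and the InfoNCE-style form below are illustrative assumptions.

```python
import numpy as np

def info_nce_rows(sim, temperature=0.1):
    """Cross-entropy loss where the diagonal of `sim` holds the positive pairs."""
    logits = sim / temperature
    log_z = np.log(np.exp(logits).sum(axis=1))
    return float(np.mean(log_z - np.diag(logits)))

def contrastive_equivariant_loss(z1a, z1b, z2a, z2b):
    """Row i holds embeddings for transformation t_i applied to two images:
    z1a/z1b are image 1 before/after t_i, and z2a/z2b likewise for image 2.
    The displacement t_i induces on image 1 should match the displacement it
    induces on image 2 (positive), and differ from displacements induced by
    the other transformations in the batch (negatives)."""
    d1 = z1b - z1a
    d2 = z2b - z2a
    d1 = d1 / np.linalg.norm(d1, axis=1, keepdims=True)
    d2 = d2 / np.linalg.norm(d2, axis=1, keepdims=True)
    return info_nce_rows(d1 @ d2.T)

# Toy usage: a batch of 8 transformations, 16-dimensional embeddings.
zs = [np.random.randn(8, 16) for _ in range(4)]
print(contrastive_equivariant_loss(*zs))
```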

Learning predictable and robust neural representations by straightening image sequences

X. Niu, Cristina Savin, E. P. Simoncelli

Prediction is a fundamental capability of all living organisms, and has been proposed as an objective for learning sensory representations. Recent work demonstrates that in primate visual systems, prediction is facilitated by neural representations that follow straighter temporal trajectories than their initial photoreceptor encoding, which allows for prediction by linear extrapolation. Inspired by these experimental findings, we develop a self-supervised learning (SSL) objective that explicitly quantifies and promotes straightening. We demonstrate the power of this objective in training deep feedforward neural networks on smoothly rendered synthetic image sequences that mimic commonly occurring properties of natural videos. The learned model contains neural embeddings that are predictive, but also factorize the geometric, photometric, and semantic attributes of objects. The representations also prove more robust to noise and adversarial attacks compared to previous SSL methods that optimize for invariance to random augmentations. Moreover, these beneficial properties can be transferred to other training procedures by using the straightening objective as a regularizer, suggesting a broader utility of straightening as a principle for robust unsupervised learning.
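
A minimal sketch of the kind of straightening measure such an objective promotes (the paper's exact formulation may differ): the average cosine of the turning angle between successive steps of a trajectory in representation space. Straighter trajectories score closer to 1, making linear extrapolation a better predictor of the next frame.

```python
import numpy as np

def straightness(z, eps=1e-8):
    """z: (T, D) embedding of T consecutive frames. Returns the mean cosine of
    the angle between successive step vectors; 1.0 for a straight trajectory."""
    v = np.diff(z, axis=0)                               # steps z[t+1] - z[t]
    v = v / (np.linalg.norm(v, axis=1, keepdims=True) + eps)
    return float(np.sum(v[:-1] * v[1:], axis=1).mean())

t = np.linspace(0, 1, 20)[:, None]
print(straightness(t * np.ones((1, 8))))     # straight line -> 1.0
print(straightness(np.random.randn(20, 8)))  # random walk -> near 0
```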

Shaping the distribution of neural responses with interneurons in a recurrent circuit model

D. Lipshutz, E. P. Simoncelli

Efficient coding theory posits that sensory circuits transform natural signals into neural representations that maximize information transmission subject to resource constraints. Local interneurons are thought to play an important role in these transformations, dynamically shaping patterns of local circuit activity to facilitate and direct information flow. However, the relationship between these coordinated, nonlinear, circuit-level transformations and the properties of interneurons (e.g., connectivity, activation functions, response dynamics) remains unknown. Here, we propose a normative computational model that establishes such a relationship. Our model is derived from an optimal transport objective that conceptualizes the circuit’s input-response function as transforming the inputs to achieve an efficient target response distribution. The circuit, which is composed of primary neurons recurrently connected to a set of local interneurons, continuously optimizes this objective by dynamically adjusting both the synaptic connections between neurons and the interneuron activation functions. In an example application motivated by redundancy reduction, we construct a circuit that learns a dynamical nonlinear transformation that maps natural image data to a spherical Gaussian, significantly reducing statistical dependencies in neural responses. Overall, our results provide a framework in which the distribution of circuit responses is systematically and nonlinearly controlled by adjustment of interneuron connectivity and activation functions.
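
A heavily simplified linear analogue of this circuit motif, written as a sketch under invented parameters (the paper's model additionally adapts the interneuron activation functions to reach a nonlinear target such as a spherical Gaussian): primary neurons receive feedforward input and recurrent inhibition from interneurons, and slow plasticity drives the steady-state responses toward decorrelation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4                                    # primary neurons (= interneurons here)
A = rng.standard_normal((D, D))
COV = A @ A.T + 1.5 * np.eye(D)          # correlated sensory input covariance
W = 0.1 * rng.standard_normal((D, D))    # interneuron <-> primary synapses
eta = 2e-3

for _ in range(50000):
    x = rng.multivariate_normal(np.zeros(D), COV)
    # fast neural dynamics, solved at steady state for this linear circuit:
    #   r = x - W.T @ n  and  n = W @ r   =>   r = (I + W.T @ W)^{-1} x
    r = np.linalg.solve(np.eye(D) + W.T @ W, x)
    n = W @ r
    # slow Hebbian/anti-Hebbian plasticity on the interneuron synapses
    W += eta * (np.outer(n, r) - W)

X = rng.multivariate_normal(np.zeros(D), COV, size=5000)
R = np.linalg.solve(np.eye(D) + W.T @ W, X.T).T
print(np.round(np.cov(R.T), 2))   # response covariance: much closer to identity than COV
```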

Disentangling Interacting Systems with Fermionic Gaussian Circuits: Application to the Single Impurity Anderson Model

Ang-Kun Wu, B. Kloss, Wladislaw Krinitsin, M. Fishman, J. Pixley, M. Stoudenmire

Tensor network quantum states are powerful tools for strongly correlated systems, tailored to capture local correlations, such as those in ground states obeying entanglement area laws. When applying tensor network states to interacting fermionic systems, a proper choice of the basis or orbitals can reduce the bond dimension of tensors and provide physically relevant orbitals. We introduce such a change of basis with unitary gates obtained from compressing fermionic Gaussian states into quantum circuits corresponding to various tensor networks. These circuits can reduce the ground-state entanglement entropy and improve the performance of algorithms such as the density matrix renormalization group. We study the Anderson impurity model with one and two impurities to show the potential of the method for improving computational efficiency and interpreting impurity physics. Furthermore, fermionic Gaussian circuits can also suppress entanglement during time evolution out of a low-energy state. Lastly, we consider Gaussian multiscale entanglement renormalization ansatz (GMERA) circuits, which compress fermionic Gaussian states hierarchically. The emergent coarse-grained physical models from these GMERA circuits are studied in terms of their entanglement properties and suitability for performing time evolution.
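
A small illustration of the Gaussian-state machinery underlying this approach (standard free-fermion formulas, not the paper's full compression algorithm): for a fermionic Gaussian state, all entanglement information resides in the single-particle correlation matrix, and the entanglement entropy of any subregion follows from its restriction. Single-particle rotations that bring this matrix toward diagonal form are what the compressed circuits implement.

```python
import numpy as np

L, Nocc = 12, 6
h = -(np.eye(L, k=1) + np.eye(L, k=-1))        # 1D tight-binding hopping matrix
evals, U = np.linalg.eigh(h)
C = U[:, :Nocc] @ U[:, :Nocc].conj().T         # ground-state correlations <c_i^dag c_j>

def entanglement_entropy(C, sites):
    """Von Neumann entropy of a Gaussian state restricted to `sites`."""
    nu = np.linalg.eigvalsh(C[np.ix_(sites, sites)])
    nu = np.clip(nu, 1e-12, 1 - 1e-12)
    return float(-(nu * np.log(nu) + (1 - nu) * np.log(1 - nu)).sum())

print(entanglement_entropy(C, list(range(L // 2))))   # half-chain entropy
```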

Geometric model for dynamics of motor-driven centrosomal asters

Yuan-Nan Young, Vicente Gomez Herrera, Huan Zhang, R. Farhadifar, M. Shelley

The centrosomal aster is a mobile and adaptable cellular organelle that exerts and transmits forces necessary for tasks such as nuclear migration and spindle positioning. Recent experimental and theoretical studies of nematode and human cells demonstrate that pulling forces on asters by cortically anchored force generators are dominant during such processes. Here, we present a comprehensive investigation of the S-model (S for stoichiometry) of aster dynamics based solely on such forces. The model evolves the astral centrosome position, a probability field of cell-surface motor occupancy by centrosomal microtubules (under an assumption of stoichiometric binding), and free boundaries of unattached, growing microtubules. We show how cell shape affects the stability of aster centering and its transition to oscillations with increasing motor number. Seeking to understand observations in single-cell nematode embryos, we use highly accurate simulations to examine the nonlinear structures of the bifurcations, and demonstrate the importance of binding domain overlap to interpreting genetic perturbation experiments. We find a generally rich dynamical landscape, dependent upon cell shape, including internal constant-velocity equatorial orbits of asters that can be seen as traveling wave solutions. Finally, we study the interactions of multiple asters, for which we demonstrate an effective mutual repulsion arising from their competition for surface force generators. Remarkably, we find that centrosomes can relax onto the vertices of Platonic and non-Platonic solids, very closely mirroring the results of the classical Thomson problem for energy-minimizing configurations of electrons constrained to a sphere and interacting via repulsive Coulomb potentials. Our findings both explain experimental observations, providing insights into the mechanisms governing spindle positioning and cell division dynamics, and show the possibility of new nonlinear phenomena in cell biology.
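
To convey the structure of the dynamical system (a position variable coupled to slowly relaxing motor-occupancy variables), here is a drastically reduced one-dimensional caricature. Every functional form and parameter below is invented for illustration; the actual S-model evolves an occupancy field over the cell surface, and its centering, oscillation, and orbit behavior depends on cell geometry and motor number as analyzed in the paper.

```python
import numpy as np

def simulate(x0=0.3, n_motors=50.0, dt=1e-4, steps=100000):
    """Toy 1D dynamics: centrosome at x in [-1, 1], pulled toward each pole
    in proportion to motor occupancy there; occupancy relaxes toward a
    binding rate that grows with microtubule contact (invented form)."""
    x, pL, pR = x0, 0.0, 0.0          # position; occupancies at the two poles
    k_off, drag = 1.0, 50.0
    traj = np.empty(steps)
    for t in range(steps):
        on_L = 5.0 / (1.0 + (x + 1.0))            # contact falls with distance
        on_R = 5.0 / (1.0 + (1.0 - x))
        pL += dt * (on_L * (1.0 - pL) - k_off * pL)   # stoichiometric binding
        pR += dt * (on_R * (1.0 - pR) - k_off * pR)
        x += dt * n_motors * (pR - pL) / drag         # net pull toward poles
        x = float(np.clip(x, -0.99, 0.99))
        traj[t] = x
    return traj

print(simulate()[-1])
```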

The 2024 New York City Integrative Structural Biology Symposium

P. Cossio, Edward T. Eng

The 2024 New York City Integrative Structural Biology Symposium focused on understanding the challenges and opportunities of applying integrative structural biology techniques to biomedical research. To foster connections across different fields and disciplines, the symposium offered hands-on workshops. These workshops gave attendees an opportunity to use state-of-the-art instrumentation and software programs in the structural biology sciences that they may not have access to in their own laboratories. Moreover, the symposium provided a vibrant environment for scientific discourse, with cutting-edge research talks presenting trends in integrative structural biology in the New York City area. In this TrendsTalk, the symposium organizers bring you the highlights of the workshops and scientific sessions from this event.

Equispaced Fourier representations for efficient Gaussian process regression from a billion data points

Philip Greengard, M. Rachh, A. Barnett

We introduce a Fourier-based fast algorithm for Gaussian process regression in low dimensions. It approximates a translationally invariant covariance kernel by complex exponentials on an equispaced Cartesian frequency grid of \(M\) nodes. This results in a weight-space \(M\times M\) system matrix with Toeplitz structure, which can thus be applied to a vector in \({\mathcal O}(M \log{M})\) operations via the fast Fourier transform (FFT), independent of the number of data points \(N\). The linear system can be set up in \({\mathcal O}(N+M \log{M})\) operations using nonuniform FFTs. This enables efficient massive-scale regression via an iterative solver, even for kernels with fat-tailed spectral densities (large \(M\)). We provide bounds on both kernel approximation and posterior mean errors. Numerical experiments for squared-exponential and Matérn kernels in one, two, and three dimensions often show 1–2 orders of magnitude acceleration over state-of-the-art rank-structured solvers at comparable accuracy. Our method allows two-dimensional Matérn-\(\frac{3}{2}\) regression from \(N=10^9\) data points to be performed in two minutes on a standard desktop, with posterior mean accuracy \(10^{-3}\). This opens up spatial statistics applications 100 times larger than previously possible.
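
The core trick that enables fast application of the system matrix is standard and worth sketching: a Toeplitz matrix-vector product computed by circulant embedding and the FFT, in \({\mathcal O}(M \log M)\). The one-dimensional version below is a self-contained illustration; the paper uses its d-dimensional analogue together with nonuniform FFTs to set up the linear system.

```python
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """Apply the Toeplitz matrix defined by its first column/row to x in
    O(M log M), by embedding it in a 2M x 2M circulant and using the FFT."""
    M = len(x)
    c = np.concatenate([first_col, [0.0], first_row[1:][::-1]])  # circulant column
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, 2 * M))
    return y[:M].real

# Check against a dense Toeplitz multiply.
M = 256
col = np.random.randn(M)
row = np.random.randn(M); row[0] = col[0]
T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(M)]
              for i in range(M)])
x = np.random.randn(M)
assert np.allclose(T @ x, toeplitz_matvec(col, row, x))
```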

Approximate Contraction of Arbitrary Tensor Networks with a Flexible and Efficient Density Matrix Algorithm

Linjian Ma, M. Fishman, M. Stoudenmire, Edgar Solomonik

Tensor network contractions are widely used in statistical physics, quantum computing, and computer science. We introduce a method to efficiently approximate tensor network contractions using low-rank approximations, where each intermediate tensor generated during the contractions is approximated as a low-rank binary tree tensor network. The proposed algorithm has the flexibility to incorporate a large portion of the environment when performing low-rank approximations, which can lead to high accuracy for a given rank. Here, the environment refers to the remaining set of tensors in the network, and low-rank approximations with larger environments can generally provide higher accuracy. For contracting tensor networks defined on lattices, the proposed algorithm can be viewed as a generalization of the standard boundary-based algorithms. In addition, the algorithm includes a cost-efficient density matrix algorithm for approximating a tensor network with a general graph structure into a tree structure, whose computational cost is asymptotically upper-bounded by that of the standard algorithm that uses canonicalization. Experimental results indicate that the proposed technique outperforms previously proposed approximate tensor network contraction algorithms for multiple problems in terms of both accuracy and efficiency.
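
The elementary operation underlying such contraction schemes can be sketched with a plain truncated SVD, which splits an intermediate tensor across a chosen bipartition of its indices. This is only the textbook version; the paper's density matrix algorithm achieves the same low-rank goal while incorporating environment tensors at lower cost.

```python
import numpy as np

def truncate(tensor, left_inds, rank):
    """Split `tensor` across (left_inds | rest) and keep `rank` singular values."""
    perm = list(left_inds) + [i for i in range(tensor.ndim) if i not in left_inds]
    A = np.transpose(tensor, perm)
    lshape = A.shape[:len(left_inds)]
    rshape = A.shape[len(left_inds):]
    U, s, Vh = np.linalg.svd(A.reshape(int(np.prod(lshape)), -1), full_matrices=False)
    U = (U[:, :rank] * s[:rank]).reshape(*lshape, rank)   # absorb singular values
    V = Vh[:rank].reshape(rank, *rshape)
    return U, V   # contracting over the new bond approximately recovers the tensor

T = np.random.randn(4, 4, 4, 4)
U, V = truncate(T, left_inds=(0, 1), rank=8)
approx = np.einsum('abk,kcd->abcd', U, V)
print(np.linalg.norm(T - approx) / np.linalg.norm(T))   # relative truncation error
```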
