Postdoc at the McGovern Institute for Brain Research
MIT
Hey there!
I am a postdoc at the McGovern Institute for Brain Research at MIT, working with Nancy Kanwisher. I use techniques from artificial intelligence and functional imaging to understand how the brain processes the external world. I am most interested in challenges at the intersection of neuroscience and large-scale data analysis. My postdoctoral research focuses on developing interpretable machine learning tools to understand structured neural representations in the human visual cortex and deep neural networks. During my PhD, I also worked more broadly at the intersection of machine learning and neuroimaging, developing predictive models to understand the distinctive characteristics of the brains of people affected by different mental disorders.
PhD in Electrical and Computer Engineering, 2017-2021
Cornell University
B. Tech - M. Tech dual degree in Electrical Engineering, 2011-2016
Indian Institute of Technology, Kanpur, India
Recent news
[Oct'22] It was a pleasure to take part in a fun discussion on food-selective neural responses at the Quantum Photonics Clubhouse! [Listen here]
[Oct'22] Talked about ‘Food on the brain’ at the Cambridge Science Festival, MIT Museum
[Sep'22] Paper on ‘Characterizing the Ventral Visual Stream with Response-Optimized Neural Encoding Models’ accepted at NeurIPS'22!
[Aug'22] ‘A highly selective response to food in human visual cortex revealed by hypothesis-free voxel decomposition’ accepted to Current Biology! [paper] Featured in The Guardian!
[Aug'22] Presented our work on food-selectivity in the ventral visual cortex at CCN! [paper]
[May'22] Gave an oral presentation on our work entitled ‘Data-driven component modeling reveals the functional organization of high-level visual cortex’ at VSS!
[Mar'22] Presented our work entitled ‘Hypothesis-neutral models of higher-order visual cortex reveal strong semantic selectivity’ at Cosyne! [Abstract]
[Dec'21] Shared our work on ‘emergent semantic selectivity in hypothesis-neutral response-optimized models of high-level visual cortex’ in an oral presentation at Neuromatch 4.0
[Sep'21] Started a postdoctoral position at the McGovern Institute for Brain Research, working with Nancy Kanwisher
[Jul'21] Defended my thesis!
[Apr'21] Gave a talk at the Biomedical Image Computing series, ETH Zurich on ‘Predicting cortical responses to naturalistic stimuli using deep learning’
[Jul'20] Co-led a breakout session on “Machine Learning for Neuroimaging” with Elvisha Dhamala and Carmen Khoo in the Women in Machine Learning un-workshop @ ICML'20.
[Jun'20] Presented our research poster on “holistic neural encoding with multi-modal naturalistic stimuli” at OHBM 2020. The poster and a quick video walkthrough are also available at https://doi.org/10.5281/zenodo.3894420
Selected Publications
Neural encoding with visual attention
Meenakshi Khosla, Gia H. Ngo, Keith Jamison, Amy Kuceyeski and Mert R. Sabuncu. To appear in NeurIPS 2020 [arXiv]
Abstract: Visual perception is critically influenced by the focus of attention. Due to limited resources, it is well known that neural representations are biased in favor of attended locations. Using concurrent eye-tracking and functional Magnetic Resonance Imaging (fMRI) recordings from a large cohort of human subjects watching movies, we first demonstrate that leveraging gaze information, in the form of attentional masking, can significantly improve brain response prediction accuracy in a neural encoding model. Next, we propose a novel approach to neural encoding by including a trainable soft-attention module. Using our new approach, we demonstrate that it is possible to learn visual attention policies by end-to-end learning merely on fMRI response data, and without relying on any eye-tracking. Interestingly, we find that attention locations estimated by the model on independent data agree well with the corresponding eye fixation patterns, despite no explicit supervision to do so. Together, these findings suggest that attention modules can be instrumental in neural encoding models of visual stimuli.
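To make the idea of a trainable soft-attention module concrete, here is a minimal, hypothetical sketch in PyTorch. It is not the architecture from the paper: the backbone features, layer sizes, and readout are all stand-ins, and the only point it illustrates is that a spatial attention map can be learned end to end purely from an fMRI prediction loss.

```python
import torch
import torch.nn as nn

class SoftAttentionEncoder(nn.Module):
    """Toy encoding model with a trainable soft-attention module (illustrative only)."""

    def __init__(self, in_channels: int, n_voxels: int):
        super().__init__()
        # 1x1 convolution produces one attention logit per spatial location
        self.attn = nn.Conv2d(in_channels, 1, kernel_size=1)
        # linear readout maps the attention-pooled feature vector to voxel responses
        self.readout = nn.Linear(in_channels, n_voxels)

    def forward(self, feats):
        # feats: (batch, channels, H, W) feature map from a frozen visual backbone
        b, c, h, w = feats.shape
        attn = torch.softmax(self.attn(feats).view(b, -1), dim=-1).view(b, 1, h, w)
        pooled = (feats * attn).sum(dim=(2, 3))      # attention-weighted spatial pooling
        return self.readout(pooled), attn            # predicted responses + attention map

# The attention map receives no supervision: it is shaped only by the fMRI prediction loss.
model = SoftAttentionEncoder(in_channels=512, n_voxels=1000)
feats = torch.randn(8, 512, 14, 14)                  # stand-in backbone features
target = torch.randn(8, 1000)                        # stand-in voxel responses
pred, attn_map = model(feats)
loss = nn.functional.mse_loss(pred, target)
loss.backward()
```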
A shared neural encoding model for the prediction of subject-specific fMRI response
Meenakshi Khosla, Gia H. Ngo, Keith Jamison, Amy Kuceyeski and Mert R. Sabuncu. To appear in the proceedings of MICCAI 2020 [arXiv] [code] [talk]
Abstract: The increasing popularity of naturalistic paradigms in fMRI (such as movie watching) demands novel strategies for multi-subject data analysis, such as use of neural encoding models. In the present study, we propose a shared convolutional neural encoding method that accounts for individual-level differences. Our method leverages multi-subject data to improve the prediction of subject-specific responses evoked by visual or auditory stimuli. We showcase our approach on high-resolution 7T fMRI data from the Human Connectome Project movie-watching protocol and demonstrate significant improvement over single-subject encoding models. We further demonstrate the ability of the shared encoding model to successfully capture meaningful individual differences in response to traditional task-based facial and scenes stimuli. Taken together, our findings suggest that inter-subject knowledge transfer can be beneficial to subject-specific predictive models.
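A rough sketch of the shared-backbone, subject-specific-readout idea is below. The trunk, layer sizes, and stimuli are illustrative placeholders rather than the published architecture; the point is simply that one shared feature extractor is trained across subjects while each subject keeps an individual linear readout.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Shared convolutional trunk with one readout head per subject (illustrative only)."""

    def __init__(self, n_subjects: int, n_voxels: int):
        super().__init__()
        self.trunk = nn.Sequential(                  # shared across all subjects
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # subject-specific linear readouts capture individual differences
        self.heads = nn.ModuleList([nn.Linear(64, n_voxels) for _ in range(n_subjects)])

    def forward(self, stimulus, subject_id: int):
        return self.heads[subject_id](self.trunk(stimulus))

model = SharedEncoder(n_subjects=4, n_voxels=500)
frames = torch.randn(2, 3, 64, 64)           # stand-in movie frames
pred = model(frames, subject_id=0)            # predicted responses for subject 0
```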
Ensemble learning with 3D convolutional neural networks for functional connectome-based prediction
Meenakshi Khosla, Keith Jamison, Amy Kuceyeski and Mert R. Sabuncu. NeuroImage [pub] [arXiv] [code]
Short abstract: In this study, we critically evaluate the effect of brain parcellations on machine learning models applied to rs-fMRI data. Our experiments reveal an intriguing trend: on average, models with stochastic parcellations consistently perform as well as models with widely used atlases at the same spatial scale. We thus propose an ensemble learning strategy to combine the predictions from models trained on connectivity data extracted using different (e.g., stochastic) parcellations. We further present an implementation of our ensemble learning strategy with a novel 3D Convolutional Neural Network (CNN) approach. This overcomes the limitations of traditional machine learning models for connectomes that often rely on region-based summary statistics and/or linear models. We showcase our approach on a classification (autism patients versus healthy controls) and a regression problem (prediction of subject’s age), and report promising results.
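The ensemble idea can be sketched with toy data as follows, with a simple logistic-regression classifier standing in for the 3D CNN described in the paper. The parcellation, feature extraction, and data here are illustrative placeholders; the only point is that predictions from models trained on connectomes built with different stochastic parcellations are averaged.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def random_parcellation(n_voxels, n_parcels):
    """Assign each voxel to a random parcel (stand-in for a stochastic parcellation)."""
    return rng.integers(0, n_parcels, size=n_voxels)

def connectome_features(timeseries, labels, n_parcels):
    """Average voxel time series within parcels, then vectorize the correlation matrix."""
    parcel_ts = np.stack([timeseries[:, labels == p].mean(axis=1) for p in range(n_parcels)], axis=1)
    corr = np.corrcoef(parcel_ts, rowvar=False)
    return corr[np.triu_indices(n_parcels, k=1)]

# Toy data: 40 subjects, 100 time points, 300 voxels, binary diagnostic labels
X_ts = rng.standard_normal((40, 100, 300))
y = rng.integers(0, 2, size=40)

# Train one model per stochastic parcellation and average the predicted probabilities
probas = []
for _ in range(5):
    labels = random_parcellation(n_voxels=300, n_parcels=20)
    feats = np.array([connectome_features(ts, labels, 20) for ts in X_ts])
    clf = LogisticRegression(max_iter=1000).fit(feats[:30], y[:30])
    probas.append(clf.predict_proba(feats[30:])[:, 1])

ensemble_pred = (np.mean(probas, axis=0) > 0.5).astype(int)   # ensemble decision for held-out subjects
```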
Machine learning in resting-state fMRI analysis
Meenakshi Khosla, Keith Jamison, Gia H. Ngo, Amy Kuceyeski and Mert R. Sabuncu. Magnetic Resonance Imaging: Special Issue in Machine Learning [pub] [arXiv]
Short abstract: Here, we present an overview of various unsupervised and supervised machine learning applications to rs-fMRI. We offer a methodical taxonomy of machine learning methods in resting-state fMRI. We identify three major divisions of unsupervised learning methods with regard to their applications to rs-fMRI, based on whether they discover principal modes of variation across space, time or population. Next, we survey the algorithms and rs-fMRI feature representations that have driven the success of supervised subject-level predictions. The goal is to provide a high-level overview of the burgeoning field of rs-fMRI from the perspective of machine learning applications.
Detecting abnormalities in resting-state dynamics: An unsupervised learning approach
Meenakshi Khosla, Keith Jamison, Amy Kuceyeski and Mert R. Sabuncu. MLMI @ MICCAI 2019 [pub] [arXiv]
Short abstract: In this paper, we explore two strategies for capturing the normal variability in resting-state activity across a healthy population: (a) an autoencoder approach on the rs-fMRI sequence, and (b) a next frame prediction strategy. We show that both approaches can learn useful representations of rs-fMRI data and demonstrate their novel application for abnormality detection in the context of discriminating autism patients from healthy controls.
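As a hypothetical illustration of the first strategy, the sketch below trains a toy autoencoder on frames from healthy controls and uses per-frame reconstruction error as an abnormality score. The architecture and data are placeholders, not the models from the paper.

```python
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    """Toy autoencoder over flattened rs-fMRI frames (sizes are illustrative only)."""

    def __init__(self, n_voxels, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_voxels, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, n_voxels))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = FrameAutoencoder(n_voxels=1000)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

healthy_frames = torch.randn(256, 1000)      # stand-in frames from healthy controls
for _ in range(10):                          # fit the model on healthy data only
    loss = nn.functional.mse_loss(model(healthy_frames), healthy_frames)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

test_frames = torch.randn(16, 1000)          # frames from a new, possibly atypical subject
with torch.no_grad():
    # higher reconstruction error = less like the healthy training distribution
    abnormality = ((model(test_frames) - test_frames) ** 2).mean(dim=1)
```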
3D convolutional neural networks for classification of functional connectomes
Meenakshi Khosla, Keith Jamison, Amy Kuceyeski and Mert R. Sabuncu. DLMIA @ MICCAI 2018 [pub] [arXiv]
Short abstract: In this work, we propose a novel volumetric Convolutional Neural Network (CNN) framework that takes advantage of the full-resolution 3D spatial structure of rs-fMRI data and fits non-linear predictive models. We showcase our approach on a challenging large-scale dataset (ABIDE, with N > 2,000) and report state-of-the-art accuracy results on rs-fMRI-based discrimination of autism patients and healthy controls.
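A minimal, illustrative 3D CNN of this kind might look like the following; the channels, depths, and choice of input feature volume are placeholders rather than the configuration used in the paper.

```python
import torch
import torch.nn as nn

class Volumetric3DCNN(nn.Module):
    """Minimal 3D CNN classifying a volumetric brain map into two classes (illustrative only)."""

    def __init__(self, in_channels=1, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        # x: (batch, channels, D, H, W) voxel-wise feature volumes
        return self.classifier(self.features(x))

model = Volumetric3DCNN()
volumes = torch.randn(4, 1, 32, 32, 32)      # stand-in volumetric inputs
logits = model(volumes)                       # (4, 2) class scores
```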
Short Projects
Machine learning methods for seizure detection [report]
This project was implemented as part of ECE5040 under the guidance of Prof. Mahsa Shoaran.
Here, we explored the utility of various feature sets extracted from intracranial EEG recordings (time-domain and frequency-domain), as well as different machine learning algorithms, for automated seizure detection.
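As a rough illustration of this kind of pipeline (not the exact features or models used in the project), the sketch below computes a few simple time- and frequency-domain features per EEG window and trains an off-the-shelf classifier on them.

```python
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

def window_features(window, fs=256.0):
    """Simple time- and frequency-domain features for one single-channel EEG window."""
    line_length = np.sum(np.abs(np.diff(window)))   # time-domain: line length
    variance = np.var(window)                       # time-domain: variance
    freqs, psd = welch(window, fs=fs, nperseg=min(len(window), 256))
    bands = [(1, 4), (4, 8), (8, 13), (13, 30), (30, 70)]
    band_power = [psd[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]
    return np.array([line_length, variance, *band_power])

rng = np.random.default_rng(0)
# Toy dataset: 200 one-second windows, labels 1 = seizure, 0 = background
windows = rng.standard_normal((200, 256))
labels = rng.integers(0, 2, size=200)

X = np.array([window_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:150], labels[:150])
accuracy = clf.score(X[150:], labels[150:])    # held-out window accuracy
```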
A graph-based approach to estimate mutual information [report]
This project was implemented as part of ECE6970 under the guidance of Prof. Ziv Goldfeld.
Here, we presented a novel approach, relying solely on the neighborhood graph of a neural network's internal representations, to estimate the mutual information between the input data and those representations.
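For illustration, a standard neighborhood-graph (k-nearest-neighbor) estimator of mutual information in the Kraskov (KSG) style is sketched below; it is a well-known baseline from this family, not the specific estimator proposed in the report.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def ksg_mutual_information(x, y, k=3):
    """Kraskov-style k-nearest-neighbor estimate of I(X; Y) in nats.

    x, y: arrays of shape (n_samples, dim_x) and (n_samples, dim_y).
    """
    n = len(x)
    joint = np.hstack([x, y])
    # distance to the k-th neighbor in the joint space (Chebyshev norm), excluding the point itself
    eps = cKDTree(joint).query(joint, k=k + 1, p=np.inf)[0][:, -1]
    # count marginal-space neighbors strictly inside that radius
    nx = cKDTree(x).query_ball_point(x, eps - 1e-12, p=np.inf, return_length=True) - 1
    ny = cKDTree(y).query_ball_point(y, eps - 1e-12, p=np.inf, return_length=True) - 1
    return digamma(k) + digamma(n) - np.mean(digamma(nx + 1) + digamma(ny + 1))

# Sanity check on correlated Gaussians, where the true MI is -0.5 * log(1 - rho**2)
rng = np.random.default_rng(0)
rho = 0.8
x = rng.standard_normal((2000, 1))
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal((2000, 1))
print(ksg_mutual_information(x, y))   # should be close to ~0.51 nats
```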
Prediction of longitudinal evolution of Alzheimer’s Disease [report]
This project was implemented as part of ECE5970 under the guidance of Prof. Mert Sabuncu.
Here, we implemented several algorithms to predict future disease states and clinical scores of patients from multi-modal imaging data, including functional principal component analysis, linear and non-linear mixed-effects models, and random forests.
Bayesian nonparametric extensions of Hidden Markov Models [report]
This project was implemented as part of ORIE6780 under the guidance of Prof. David Ruppert.
Here, we reviewed Bayesian nonparametric models for time-series data and discussed the evolution of their inference algorithms.