About

This page describes my research, background, and hobbies.

Bio

I am a graduate student in Data Science at NYU, working on probabilistic programming, causal inference, and graph-based deep learning. I hold a Bachelor’s degree in Computer Engineering and, until recently, worked at Adobe Research and the European Organisation for Nuclear Research (CERN). I’ve worked on machine learning for high-energy physics, deep learning-based recommender systems, and machine learning in cybersecurity, and I have a working knowledge of language models and deep convolutional neural networks.

Probabilistic Programming and COVID-19 Models

This project offers an exposition of COVID-19 modeling techniques based on the ideas and problem setup highlighted in Wood et al. (2020). We define a generative model corresponding to our intuition about epidemiological modeling using the probabilistic programming framework Pyro and apply probabilistic inference to draw insights into controlling the COVID-19 pandemic through interventions. In particular, we estimate confidence intervals for the outbreak parameters to ensure that a predetermined goal is achieved. We are not epidemiologists; the sole aim of this study is to serve as a guide to generative modeling, not to draw inference about the real-world impact of policy-making for COVID-19.
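The write-up itself walks through the full model; as a flavour of the approach, here is a minimal, hypothetical sketch in Pyro of conditioning a toy branching-process outbreak model on an observed case count and inferring R0 under an intervention. The prior, observation model, and `intervention_strength` parameterisation are all illustrative assumptions, not the study’s actual model.

```python
# A toy outbreak model in Pyro (illustrative only, not the study's model).
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import Importance, EmpiricalMarginal


def outbreak_model(intervention_strength):
    # Prior belief about the basic reproduction number R0.
    R0 = pyro.sample("R0", dist.LogNormal(0.0, 0.5))
    # Assume interventions scale down the effective reproduction number.
    R_eff = R0 * (1.0 - intervention_strength)
    infections = torch.tensor(10.0)  # initial cases
    for t in range(10):  # ten generations of spread
        infections = pyro.sample(f"infections_{t}",
                                 dist.Poisson(infections * R_eff + 1e-3))
    return infections


# Condition the generative model on an observed final case count, then
# estimate the posterior over R0 under a 30%-effective intervention.
conditioned = pyro.condition(outbreak_model,
                             data={"infections_9": torch.tensor(50.0)})
posterior = Importance(conditioned, num_samples=1000).run(0.3)
print("Posterior mean R0:", EmpiricalMarginal(posterior, sites="R0").mean)
```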

Simulation-based Inference

Simulators are causal generative models used to encode our understanding of complex phenomena. Knowledge of the causal relations embedded within them is crucial to making robust predictions. However, with high-fidelity simulations, we run into the challenge of intractable inference. Through a pedagogical example, we show that statistical inference can learn a model for a density estimation task, but it is not guaranteed to recover causal relations unless we make strong assumptions that happen to be correct. We rely on the ability to intervene on a simulator to recover a surrogate model that encodes the same set of conditional independence relations as the true model and can be used to predict its behaviour under intervention. My research builds on the theory of causal discovery and extends the solutions we develop to more practical problems in the field.
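To make the pedagogical point concrete, here is a minimal sketch with a hypothetical linear-Gaussian simulator (not our actual example): observational data fits either causal direction equally well, but intervening on the simulator reveals which variable is the cause.

```python
# Observation alone cannot orient X -> Y vs. Y -> X; interventions can.
# The linear-Gaussian simulator below is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)


def simulate(n, do_x=None, do_y=None):
    """Simulator with true structure X -> Y; `do_*` clamps a variable."""
    x = rng.normal(0.0, 1.0, n) if do_x is None else np.full(n, do_x)
    y = 2.0 * x + rng.normal(0.0, 1.0, n) if do_y is None else np.full(n, do_y)
    return x, y


x, y = simulate(100_000)
print("observational E[Y]:", y.mean())  # ~0.0

x, y = simulate(100_000, do_x=3.0)
print("E[Y | do(X=3)]:", y.mean())      # ~6.0: forcing X moves Y

x, y = simulate(100_000, do_y=3.0)
print("E[X | do(Y=3)]:", x.mean())      # ~0.0: forcing Y leaves X unchanged
```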

Graph Neural Networks

I’ve also applied deep learning to advance the natural sciences. I used to work on graph-based approaches to particle track reconstruction (similar to the TrackML Challenge on Kaggle): specifically, representing 3D point cloud data as a (lower-dimensional) graph and training a graph neural network on it, possibly conditioned on additional physical information (metadata). Problems in high-energy physics, and science in general, prove to be a rich testbed for statistical machine learning and Bayesian inference. It is exciting to see a growing focus on making this area more practical, especially as optimization toolkits and features are released within popular frameworks.
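Here is a minimal sketch of that pipeline, assuming PyTorch Geometric (with torch-cluster) is available: detector hits become a k-nearest-neighbour graph in 3D space, and a small graph neural network embeds each hit. The layer sizes, `k`, and the `TrackGNN` name are illustrative choices, not my actual architecture.

```python
# Represent 3D hits as a kNN graph and embed them with a small GNN.
import torch
from torch_geometric.nn import GCNConv, knn_graph


class TrackGNN(torch.nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(3, hidden)  # input: (x, y, z) hit coordinates
        self.conv2 = GCNConv(hidden, hidden)

    def forward(self, pos):
        # Build the lower-dimensional graph: connect each hit to its
        # k nearest neighbours in 3D space.
        edge_index = knn_graph(pos, k=8)
        h = self.conv1(pos, edge_index).relu()
        return self.conv2(h, edge_index)


hits = torch.randn(1000, 3)    # stand-in for detector hit positions
embeddings = TrackGNN()(hits)  # per-hit latent representations
```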

Machine Learning for Science

In [1], Mjolsness and DeCoste describe the general framework of scientific discovery: a loop involving observational studies, hypothesis generation, experimental design, iterative testing, and feedback. Almost as if in accordance with their predictions dating back two decades, machine learning has provided a toolkit for accelerating the scientific method. Simultaneously, it has also drawn from disciplines like quantum mechanics that provide foundational principles for techniques including functional analysis, energy-based models, and derivatives of the Boltzmann distribution. John Tukey once said, “The best part about being a statistician is you get to play in everyone’s backyard.”

Deep learning has established a unique perspective, with its own pros and cons, from which to approach the data-intensive stages of the scientific process. While neural networks boost resource efficiency, there is the issue of interpretability: the model behaves as a ‘black box’, failing to yield a consistent rationale supporting its predictions. For science, where a formal underpinning is a prerequisite for accepting hypotheses, this poses a non-trivial issue. Science often resorts to the Bayesian perspective, incorporating prior knowledge to improve on simply “throwing” more layers and data at a model. There are proposals of physics-informed learning: models conditioned on physical constraints [2] or utilising ‘physics-based’ loss functions [3].

A known constraint for ML in the physical sciences is the intractable nature of the likelihood function in complex, high-dimensional spaces. This begets strategies grounded in Bayesian statistics, for instance approximate Bayesian computation, which approximates the posterior by comparing simulated and observed data instead of evaluating the likelihood. Alternatively, there is a class of likelihood-free inference techniques which comes with its own set of constraints. In [4] and [5], for instance, the authors propose a neural network as a surrogate model to learn the joint likelihood ratio over a latent variable obtained from the simulation, demonstrating that this can serve as a more sample-efficient technique for certain classes of problems with an intractable likelihood.
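As a flavour of the simplest likelihood-free strategy mentioned above, here is a minimal sketch of rejection ABC. The Gaussian simulator, uniform prior, summary statistic, and tolerance are all illustrative assumptions.

```python
# Rejection ABC: accept parameters whose simulated data lies close to the
# observation, never evaluating the likelihood itself.
import numpy as np

rng = np.random.default_rng(0)
observed = 1.8  # observed summary statistic


def simulator(theta, n=100):
    # Stand-in for a simulator whose likelihood is intractable in practice.
    return rng.normal(theta, 1.0, n).mean()


accepted = []
for _ in range(100_000):
    theta = rng.uniform(-5.0, 5.0)               # draw from the prior
    if abs(simulator(theta) - observed) < 0.1:   # tolerance epsilon
        accepted.append(theta)                   # approximate posterior sample

print("ABC posterior mean:", np.mean(accepted))  # ~1.8
```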

There has been a resurgence of graph-based techniques in machine learning, possibly linked to the increase in graph-structured data and compute. A recent, comprehensive review is offered by researchers at DeepMind in [6], arguing the capacity of graphs for effectively modeling inter-object relationships. As in my work on particle track reconstruction, graphical representations can encode entity relationships into lower-dimensional latent spaces; [7] presents a survey of approaches to this end. I believe that, as computational feasibility no longer remains a bottleneck, graph-based models present a promising approach to learning complex relationships over large datasets.

Natural Language Processing, 3D Modeling

I’ve built projects to demonstrate my understanding of a topic whilst exploring a variety of domains. A few examples include a framework for prototyping chatbots with context-based question-answering models based on Jack the Reader (ACL, 2018), an automated assessment tool for basic Blender (3D modeling) assignments, and a dynamic, automated workflow for an award-winning cybersecurity tool, IllusionBlack.

Knowledge Transfer: DJ Unicode

I am passionate about knowledge transfer, actively working with DJ Unicode, a student-run organisation that I co-founded. Unicode was born of the need for skill development at the grassroots level, and for a rapport between college freshmen, sophomores, and juniors at universities whose course structure doesn’t offer such opportunities. Our aim is to extend the ‘summer-of-code’ workflow to the rest of the year, helping our students build a strong foundational understanding of software development. I’m leading the expansion of our mentorship into teaching math and statistics for machine learning through comprehensive reading groups on standard texts in the subject.

Unicode started with 15-20 students separated into 5 teams based on their projects. Today, we are a thriving community of 140+ members, with teams winning hackathons, students receiving international internship offers, multiple selections for Google Summer of Code each year, and alumni at Ivy League universities and FAANG companies in the USA!

Personal

  • Look up my tech articles published in Open Source for You (OSFY) magazine.

  • I (like to think that I) am an artist.

  • Apart from cooking and biking, I enjoy participating in hackathons, where you are likely to find me scrounging for food around midnight. I’m partial to a steaming cup of sweet, milky tea (also termed ‘cutting chai’ at Indian streetside tea stalls).

  • I like to run the occasional marathon.

Sites

Feel free to browse through some of my older posts on Blogger and The CCDev Blog.