SUrge Contributes to MicroBooNE Collaboration Research

Jessica Esquivel

Physics PhD candidate Jessica Esquivel discusses her experience being stationed at Fermilab and using Syracuse University’s SUrge GPU Cluster.

Jessica, please tell us about yourself

I’m a PhD candidate in the Physics Department’s Experimental Neutrino Physics group, working under Mitch Soderberg. I’m in my 6th year and plan on graduating within the next couple of months! I am currently working on postdoc applications and writing my thesis.

What is the relationship between the Experimental Neutrino Physics group (Department of Physics, Syracuse University) and where you’re currently stationed at Fermilab?

The Experimental Neutrino Physics group, specifically myself and the others working under Mitch Soderberg, collaborates on the MicroBooNE experiment (among other LArTPC detectors) at Fermilab. As a graduate student, I’ve worked on many projects benefiting the MicroBooNE collaboration, including writing an algorithm to find the first neutrinos detected in MicroBooNE.

Read SU Arts and Sciences news article.

How long have you been stationed at Fermilab and what are your responsibilities there?

I’ve been stationed at Fermilab since September of 2015. I’ve focused my research on improving the muon neutrino charged current (CC) inclusive cross section measurement in MicroBooNE, using Convolutional Neural Networks to separate muons from pions.

A muon neutrino CC-inclusive interaction produces a muon plus other charged particles. A background to muon neutrino CC-inclusive events is a neutral current (NC) interaction that produces a pion plus other charged particles. These two interactions look very similar in MicroBooNE, and until now the CC signal and NC background were separated by a 75 cm track-length cut: pions interact and stop at shorter distances than muons do, with an interaction length of approximately 75 cm, hence the cut. This cut, however, rejects low-energy CC-inclusive events. Training a neural network to learn differences between muons and pions other than track length can increase our acceptance of low-energy CC-inclusive events, as the sketch below illustrates.
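To make the two selection strategies concrete, here is a minimal sketch; the array names, the example values, and the 0.5 score threshold are all hypothetical, and only the 75 cm track-length cut comes from the analysis described above.

```python
import numpy as np

# Hypothetical reconstructed track lengths (cm) and CNN muon scores for a
# handful of candidate tracks; values are made up for illustration.
track_length_cm = np.array([30.0, 60.0, 90.0, 120.0])
cnn_muon_score = np.array([0.92, 0.81, 0.65, 0.97])

# Track-length selection: keep only tracks longer than 75 cm, since pions
# typically interact and stop within ~75 cm. Short, low-energy muons fail.
passes_length_cut = track_length_cm > 75.0

# CNN-based selection: cut on the network's muon/pion score instead, so
# short, low-energy charged-current events are no longer thrown away.
passes_cnn_cut = cnn_muon_score > 0.5

print(passes_length_cut)  # [False False  True  True]
print(passes_cnn_cut)     # [ True  True  True  True]
```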

On top of my research duties, I was elected as an officer of the Fermilab Student and Postdoc Association (FSPA) for the 2016-17 year. As officers, we hosted events to foster community among young graduate students and postdocs. We also represented Fermilab users in Washington, DC, where we set up meetings with members of Congress and their staff to talk about the importance of high energy physics (HEP) as well as stable funding for HEP.

FSPA changes guard for 2016-17

Can you tell us about the Bird Plot animation on Neutrino.syr.edu website and how the SU GPU computing resource “SUrge” was used to create it?

Neutrino Bird Plot

The technique behind the animation is called t-distributed stochastic neighbor embedding (t-SNE).

It’s a machine learning algorithm that reduces high-dimensional data down to a 2D plot. I’m using it to visualize what the trained Convolutional Neural Network (CNN) is learning. CNNs have weights and biases in each layer that are updated during every training iteration. The number of weights and biases per layer, as well as the number of layers, is tunable, so the number of dimensions in a CNN can get very large.

A t-SNE reduces all these dimensions to 2D and places similar data points next to other similar data points. In my case, it is showing my training images, which are images of muons, pions, protons, electrons, and gammas.
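As a hedged sketch of what this looks like in code, the snippet below runs scikit-learn’s t-SNE on stand-in feature vectors; the random features, the 1024-dimensional size, and the five-class labels are placeholders rather than the actual network activations or MicroBooNE images.

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-ins for the CNN's high-dimensional activations: one feature vector
# per training image, with an integer label per particle type
# (muon, pion, proton, electron, gamma).
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 1024))  # e.g. a 1024-d penultimate layer
labels = rng.integers(0, 5, size=1000)    # 5 particle classes

# t-SNE embeds the 1024-d vectors into 2D while keeping similar points near
# each other, so images the network treats as alike end up clustered.
embedding = TSNE(n_components=2, perplexity=30).fit_transform(features)
print(embedding.shape)  # (1000, 2)
```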

In the graphic, muons, pions, and protons are close to each other, while electrons and gammas are close to each other but far away from muons, pions, and protons. In our detector, muons, pions, and protons look very similar, so it makes sense that the CNN groups them in close proximity; electrons and gammas are grouped together because they also look similar to each other in our detector.

Training a CNN is very computationally intensive, and the size of the images gives them a large memory footprint, so SUrge was instrumental to this analysis! I trained the CNN on 100,000 images, each 576×576 pixels. The network architecture I used was GoogLeNet, which is a very deep network and currently a leading deep learning architecture.
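The interview doesn’t say which deep learning framework was used, so the following is only an illustrative sketch, in modern PyTorch, of training GoogLeNet on five particle classes; the batch size, optimizer settings, and random dummy images are placeholders.

```python
import torch
import torch.nn as nn
from torchvision.models import googlenet

# Five output classes: muon, pion, proton, electron, gamma.
# aux_logits=False keeps the forward pass returning plain logits.
model = googlenet(num_classes=5, aux_logits=False, init_weights=True)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# One training step on a dummy batch standing in for 576x576 detector
# images (torchvision's GoogLeNet pools adaptively, so it accepts inputs
# larger than the usual 224x224).
images = torch.randn(8, 3, 576, 576)
targets = torch.randint(0, 5, (8,))

optimizer.zero_grad()
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
print(loss.item())
```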

Before SUrge, it was impossible to train such a deep network with the number and size of images needed for the network to learn. I could only train a two-particle CNN with a smaller architecture and images sized 224×224 pixels.

What was involved in creating animations like this before SUrge was available?

My architecture was smaller, with smaller cropped images. Training took weeks on my previous machine, compared to approximately 8 hours using SUrge. With SUrge I was also able to do hyperparameter optimization to make sure I was using the best parameters for training on my data; a rough sketch of such a scan follows.
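For illustration, here is a minimal grid-search sketch; the parameter names, the value grids, and the dummy scoring function are all hypothetical stand-ins for real training runs on the cluster.

```python
import random
from itertools import product

# Hypothetical hyperparameter grid; a real scan would cover whatever knobs
# the training exposes (learning rate, batch size, and so on).
learning_rates = [1e-2, 1e-3, 1e-4]
batch_sizes = [16, 32, 64]

def train_and_validate(lr, batch_size):
    # Placeholder for a full training run on the GPU cluster; a real
    # implementation would train the CNN and return validation accuracy.
    return random.random()

best_score, best_params = -1.0, None
for lr, bs in product(learning_rates, batch_sizes):
    score = train_and_validate(lr, bs)
    if score > best_score:
        best_score, best_params = score, (lr, bs)

print(best_params, best_score)
```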