Brief Bio

(CV, DBLP, Google Scholar)

I am a Research Scientist in the Advanced Computing, Mathematics and Data Division at Pacific Northwest National Laboratory. My research interests are in designing scalable, fault-tolerant, and energy-efficient programming models and Machine Learning and Data Mining (MLDM) algorithms. A by-product of our research in programming models is the Communication Runtime for Extreme Scale (ComEx), which is released with Global Arrays.

More recently, I have been actively conducting research on designing scalable MLDM algorithms, including Support Vector Machines (SVM), Frequent Pattern Mining (FP-Growth), k-Nearest Neighbors (k-NN), and k-means, using MPI and PGAS models such as Global Arrays. This research is integrated into the Machine Learning Toolkit for Extreme Scale (MaTEx).

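To give a flavor of the data-parallel structure behind such algorithms, the following is a minimal sketch of distributed k-means using MPI collectives via mpi4py. It is an illustrative example only, not the MaTEx implementation; the data sizes, random data shards, and fixed iteration count are arbitrary assumptions.

    # Illustrative sketch (not MaTEx code): data-parallel k-means, where each MPI
    # rank owns a shard of the data, computes local cluster sums/counts, and an
    # allreduce combines them so every rank updates identical centroids.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    K, D, N_LOCAL = 4, 8, 1000               # clusters, features, points per rank (arbitrary)
    rng = np.random.default_rng(seed=rank)
    local_x = rng.normal(size=(N_LOCAL, D))  # stand-in for this rank's data shard

    # Every rank starts from the same centroids (broadcast from rank 0).
    centroids = np.empty((K, D))
    if rank == 0:
        centroids[:] = rng.normal(size=(K, D))
    comm.Bcast(centroids, root=0)

    for _ in range(10):                      # fixed iteration count for brevity
        # Assign each local point to its nearest centroid.
        dists = np.linalg.norm(local_x[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)

        # Local partial sums and counts per cluster.
        local_sums = np.zeros((K, D))
        local_counts = np.zeros(K)
        for k in range(K):
            members = local_x[labels == k]
            local_sums[k] = members.sum(axis=0)
            local_counts[k] = members.shape[0]

        # Global reduction: every rank receives the same totals.
        global_sums = np.empty_like(local_sums)
        global_counts = np.empty_like(local_counts)
        comm.Allreduce(local_sums, global_sums, op=MPI.SUM)
        comm.Allreduce(local_counts, global_counts, op=MPI.SUM)

        # Update centroids; leave empty clusters unchanged.
        nonempty = global_counts > 0
        centroids[nonempty] = global_sums[nonempty] / global_counts[nonempty][:, None]

    if rank == 0:
        print("final centroids:\n", centroids)

The same allreduce-style communication structure carries over to the other algorithms mentioned above; the sketch is only meant to show that structure, not the runtimes (such as ComEx/Global Arrays) used in practice.
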
Collaboration Opportunities

I am actively looking for collaborators and students who are interested in designing scalable programming models and MLDM algorithms. If you are interested, please feel free to contact me.

News

  • Jan'2015: Abhinav Vishnu attended the DOE Machine Learning Workshop and participated as a panelist.
  • Nov'2014: Our abstract on scalable Machine Learning on Extreme Scale Systems was accepted at the DOE Machine Learning Workshop.
  • Nov'2014: Our paper on network contention with PGAS models was accepted for publication at PPoPP'2015.
  • Oct'2014: Our poster on one-sided message contention with PGAS models was nominated for Best Poster at SC'14.
  • Sep'2014: Abhinav Vishnu was invited to contribute to the Dagstuhl Seminar on Resilience in Extreme Scale Systems.
  • Aug'2014: Our paper on scalable sequence homology detection was accepted for publication in JPDC.