% Start a document with the here given default font size and paper size.
\documentclass[10pt,a4paper]{article}
% Set the page margins.
\usepackage[a4paper,margin=0.75in]{geometry}
% Setup the language.
\usepackage[english]{babel}
\hyphenation{Some-long-word}
% Makes resume-specific commands available.
\usepackage{resume}
\usepackage{verbatim}
\begin{document} % begin the content of the document
\sloppy % this to relax whitespacing in favour of straight margins
% title on top of the document
\maintitle{Vishal Pramod Kasliwal}{}{Last update on \today}
\nobreakvspace{0.3em} % add some page break averse vertical spacing
% \noindent prevents paragraph's first lines from indenting
% \mbox is used to obfuscate the email address
% \sbull is a spaced bullet
% \href well..
% \\ breaks the line into a new paragraph
\noindent\href{mailto:vishal.dot.kasliwal.at.gmail.dot.com}{vishal.kasliwal\mbox{}@\mbox{}gmail.com}\sbull
% \noindent\href{mailto:vishal.at.wavecomp.dot.com}{vishal\mbox{}@\mbox{}wavecomp.com}\sbull
\textsmaller{+}1.267.206.9287\sbull
%{\newnums cies010} \emph{(Skype)}\sbull
\href{https://github.com/AstroVPK}{https://github.com/AstroVPK}
\\
6289 Mahan Dr.\sbull
San Jose, CA 95123.\sbull
USA
\\
US Permanent Resident.\sbull
Indian Citizen
\\
Fluent in English \& Hindi
\\
\spacedhrule{0.4em}{0.2em} % a horizontal line with some vertical spacing before and after
\roottitle{Summary} % a root section title
\vspace{-1.3em} % some vertical spacing
\begin{multicols}{2} % open a multicolumn environment
\noindent \emph{Imaginative former astrophysicist with a passion and an insatiable appetite for building highly scalable AI systems.}
\\
\\
Two AI hardware startups (Luminous Computing \& Wave Computing) and two AI chip behemoths (Intel Corp. \& AMD Corp.) have taught me that AI systems are incredibly hard to design \& build correctly. Having left my roots in Astrophysics \& High Performance Computing in 2017, I have learned my way around the process of designing chips and systems to accelerate AI for both training and inference. My experiences at chip startups and at large established corporations have left me with a healthy respect for teamwork. My role at Luminous Computing focused on technical product definition \& performance architecture; it gave me the chance to interact not only with potential customers to better understand their needs \& requirements, but also, at a deep technical level, with the engineering team to create a truly performant AI system for powering the future. My role at AMD focused on developing highly performant software libraries for AMD's CDNA-architecture GPGPUs.
% Having fallen in love with Astronomy at the age of four, I was spurred by my family to follow my passion for astrophysics, mathematics, \& computing.
% Coming from a {\large \sc gw basic} background in high-school, I learned {\large \sc idl} to search for non-Gaussianity in the Cosmic Microwave Background for my senior research project at the University of Richmond (2005). My {\large \sc idl} skills found application developing an automated image-analysis pipeline for studying the oxygen etching of silicon surfaces while obtaining my Master's degree at Virginia Commonwealth University (2007).
% After beginning my doctorate at Drexel University, I used a variety of techniques, including machine learning with neural nets, to quantify the accuracy with which the Large Synoptic Survey Telescope (LSST) will measure galactic distances. The sheer volume of data generated by NASA's {\it Kepler} mission led to me programming Intel's Xeon Phi accelerator cards to study the variability of Active Galactic Nuclei for my PhD dissertation (2015).
% My first postdoc with the \href{http://dm.lsst.org/}{LSST project} exposed me to the challenges of developing a high-performance software stack (15 TB/night over 10 years of raw data, \CPP \ \& Python, 50+ developers) in a professional Agile software development environment with an emphasis on algorithmic correctness \& coding standards. At the same time, working with \href{http://www.physics.upenn.edu/people/standing-faculty/adam-lidz}{Adam Lidz} at Upenn, I extended my \CPP \& Python package \href{https://github.com/AstroVPK/kali}{\textsc{k\={a}l\={i}}} to analyzing periodically-beamed stochastic light curves generated by dual black-hole systems in distant galaxies.
% Bitten by the high-performance computing (HPC) bug, I recently began a research engineer position at a Si valley company, Colfax International, that provides HPC software consulting to a broad swath of clients from industry.
\end{multicols}
\spacedhrule{0em}{-0.4em}
\roottitle{Experience - Industry}
\headedsection
{\href{https://www.intel.com/content/www/us/en/homepage.html}{Intel Corporation - Senior Staff GPU Compute Architect}}
{\textsc{Santa Clara, CA}} {
\headedsubsection
{GPU Compute Architect}
{May \apo24 -- present}
{\bodytext{
I drive end-to-end GPU compute hardware/software co-design. My responsibilities include
\begin{itemize}
\item Building expertise in workloads in AI and HPC to drive compute architecture
\item Analyzing performance of these workloads, determining bottlenecks and using these insights to drive compute architecture
\item Specifying and building the tools needed for such analysis
\item ``Hands-on'' ML kernel/operator optimization for the Windows and Linux ecosystems (DirectML, OpenVINO, SYCL)
\item Working closely with higher level software stacks (AI frameworks, high level language compilers and runtime) to drive end-to-end performance
\item Transitioning the architectural specifications and prototype code to software engineering, oversee development and follow through to product deployment and support
\item Serving as the interface for Level Zero in partnership with other compute architects
\end{itemize}
}}
}
\headedsection
{\href{https://www.amd.com/en.html}{Advanced Micro Devices (AMD) - Senior Member of Technical Staff}}
{\textsc{Santa Clara, CA}} {
\headedsubsection
{ML Performance Optimization}
{Aug \apo23 -- May \apo24}
{\bodytext{
I worked on AMD's Machine Learning Libraries to accelerate AI performance on AMD's CDNA Data Center GPGPUs (MI100x, MI200x, etc.). My responsibilities included
\begin{itemize}
\item optimizing low-precision GEMM kernels, i.e. 8-bit floating-point (E4M3 \& E5M2), 8-bit integer, and 16-bit floating point (bf16 \& fp16), for multiple generations of AMD's CDNA architecture.
\item identifying \& eliminating performance bottlenecks via runtime profiling.
\end{itemize}
I worked with a geographically distributed team spread over two continents \& used modern software development practices to manage the project.
}}
}
\headedsection % sets the header for the section and includes any subsections
{\href{https://www.luminous.com/}{Luminous Computing - Office of the CTO}}
{\textsc{Santa Clara, CA}} {
% \bodytext{I worked directly with the CTO at Luminous Computing. My role focused on translating market requirements into a technical definition of Luminous' product \& then architecting the system for performance.}
\headedsubsection
{Technical Product Definition \& Performance Architecture}
{Apr \apo22 -- May \apo23}
{\bodytext{
I worked directly with the CTO at Luminous Computing on determining \& defining what Luminous would build. I focused on a combination of technical product definition, performance architecture \& hardware/software co-development. My responsibilities included
\begin{itemize}
\item engaging with external customers to understand the evolving needs of the marketplace, expectations of system \& software behavior.
\item identifying \& collating key AI workloads of interest.
\item analyzing workloads with the goal of understanding workload characteristics.
\item mapping the workload into Luminous Computing's software \& hardware.
\item building analytical performance models to determine expected system performance and to discover system- and component-level bottlenecks \& problems.
\item co-architecting hardware \& software features to improve workload performance.
\item working with the engineering team to productize recommended architectural improvements.
\item ensuring that what the engineers were building (cutting across the software stack, the digital architecture, the compiler, and the hardware) was consistent with the intended goals of the product requirements and product value proposition.
\item identifying key system figures-of-merit (FOM) \& bottlenecks and utilizing this knowledge to define a forward technology road-map for future products that address these bottlenecks.
\end{itemize}
I worked with the engineering team on the chip \& system architecture as it developed. I also worked on future iterations of Luminous' product and defined the forward road-map of the company.
}}
}
\headedsection % sets the header for the section and includes any subsections
{\href{https://www.intel.com/content/www/us/en/homepage.html}{Intel Corporation - Senior GPU Software Architect}}
{\textsc{Santa Clara, CA}} {
\bodytext{I worked in the Software Architecture group within AXG. My role focused on architecting Intel's Level Zero GPU driver.}
\headedsubsection
{Level Zero Architecture}
{August \apo21 -- April \apo22}
{\bodytext{My responsibilities included
\begin{itemize}
\item defining the software architecture of the Level Zero GPU driver.
\item defining new architectural features for enhancing Intel's GPU architecture for HPC \& Deep Learning workloads.
\item driving adoption of best practices, i.e. architecture-specific know-how, in software products.
\end{itemize}
}}
}
\headedsection % sets the header for the section and includes any subsections
{\href{https://www.intel.com/content/www/us/en/homepage.html}{Intel Corporation - Senior Deep Learning Software Engineer}}
{\textsc{Santa Clara, CA}} {
\bodytext{I worked in the Machine Learning Performance (MLP) organization in the Machine Learning Distributed Compute (MLDC) group on accelerating Deep Learning workloads on Intel's Xe-HPC discrete accelerator cards for Deep Learning \& High-Performance Computing. My role focused on pre-Si performance optimization via hardware-software co-design. I developed \& tested computation- \& communication-kernels using hardware simulators.}
\headedsubsection
{Hardware-Software Co-Design}
{April \apo19 -- August \apo21}
{\bodytext{My responsibilities included
\begin{itemize}
\item defining architectural features for enhancing the architecture for Deep Learning workloads.
\item driving adoption of best practices, i.e. architecture-specific know-how, in software products.
\item evaluating \& projecting pre-Si Deep Learning workload performance.
\item creating the overall strategy for distributing Deep Learning workloads across multiple cards \& nodes, i.e. the scale-up/scale-out strategy.
\end{itemize}
}}
\headedsubsection
{oneCCL Development}
{October \apo20 -- August \apo21}
{\bodytext{I was responsible for
\begin{itemize}
\item researching \& implementing superior algorithms for hierarchical \& non-hierarchical collective communication kernels.
\item developing collective communication kernels for low-precision data-types (bfloat16, fp16, etc...).
\end{itemize}
}}
\headedsubsection
{Post-Si Performance Validation}
{October \apo20 -- August \apo21}
{\bodytext{My contributions included
\begin{itemize}
\item developing performance validation tests for scale-up \& scale-out.
\item working with the performance validation team to understand the observed performance behavior of new hardware.
\end{itemize}
}}
}
\headedsection % sets the header for the section and includes any subsections
{\href{https://wavecomp.ai/}{Wave Computing - Senior Staff Research \& Development Software Engineer}}
{\textsc{Campbell, CA}} {
\bodytext{Wave Computing was developing the next generation of solutions for speeding up Deep Learning applications using Dataflow Processing Units (DPUs), which contain thousands of interconnected dataflow Processing Elements (PEs). DPUs were meant to power Wave Computing's custom appliance for developing, testing, and deploying Deep Learning models. I developed the compute and data-movement software kernels meant to be executed by Wave's DPUs for Deep Learning acceleration.}
\headedsubsection
{Deep Learning Kernel- \& Library-Development}
{Dec \apo17 -- April \apo19}
{\bodytext{I worked extensively on compute \& communication kernels, tools, and the supporting library for enabling Deep Learning workloads on Wave's DPUs.
\begin{itemize}
\item Developed compute kernels for various Deep Learning layers such as Average Pooling, Convolutions, Feed-Forward, Activations, Concatenation \& fork, etc...
\item Developed a tool for visualizing the place \& route performed by JitPR.
\item Owner of the library of routines for performing IEEE 754 rounding.
\item Owner of the library of routines for changing precision.
\item Authored various block Matrix Multiplication operations.
\item Authored various multi-byte addition operations.
\item Authored various multi-byte shift operations.
\end{itemize}
}}
\headedsubsection
{Tools Development}
{Nov \apo18 -- Apr \apo19}
{\bodytext{I developed tools for
\begin{itemize}
\item Simulating (functional \& performance) Deep Learning kernels.
\item Visualizing the placement \& routing performed by JitPR.
\end{itemize}
}}
\headedsubsection
{Management Role}
{March \apo18 -- April \apo19}
{\bodytext{I assisted my manager with the management of the kernel team, consisting of seven engineers. My duties included
\begin{itemize}
\item maintaining the schedule of work being done by the members of the team.
\item assisting in the planning of new work items.
\item identifying \& interviewing new compute-kernel team candidates.
\item reporting on the progress of the compute-kernel team to the VP of product engineering.
\end{itemize}
Major accomplishments include
\begin{itemize}
\item I developed a workflow for creating new compute-kernels.
\item I integrated the schedule of work into a JIRA managed project.
\end{itemize}
The outcome of my efforts was better project management, leading to a significant reduction in the time taken to develop new compute kernels.
}}
}
\headedsection % sets the header for the section and includes any subsections
{\href{https://colfaxresearch.com/}{Colfax International - High Performance Computing (HPC) Research Engineer}}
{\textsc{Sunnyvale, CA}} {%
\headedsubsection
{HPC Computational Fluid Dynamics (CFD) Application Development}
{Mar \apo17 -- Dec \apo17}
{\bodytext{HPC consulting project. I parallelized a CFD simulation code written in C for a client in the oil \& gas sector, resulting in a $\sim$ 10X speedup. I also refactored the client's codebase to enable independent time-evolution in different regions of the simulation via domain-decomposition methods.}}
\headedsubsection
{C/C++ Compiler Analysis}
{Nov \apo17}
{\bodytext{HPC research project. I investigated the suitability of C++ compilers for HPC applications. I developed optimized scientific computational kernels and measured the performance each compiler obtained on each kernel. I analyzed the compiled binary code to determine the reasons for the differences in performance. A technical report of my findings can be obtained at \href{https://colfaxresearch.com/compiler-comparison/}{Colfax Research}.}}
\headedsubsection
{Intel Advisor Lecture}
{May \apo17 -- Jul \apo17}
{\bodytext{I presented a lecture on \href{https://software.intel.com/en-us/advisor}{Intel Advisor} for the Stanford University course ME 344: Introduction to High Performance Computing on July 20th, 2017.}}
}
\roottitle{Research Experience in Academia (2002 to March, 2017)}
\headedsection % sets the header for the section and includes any subsections
{\href{http://dm.lsst.org/}{Large Synoptic Survey Telescope (LSST) Data Management (Princeton University)}}
{\textsc{Princeton, NJ}} {%
\headedsubsection
{Postdoctoral Research Associate}
{Sept \apo15 -- Feb \apo17}
{\bodytext{LSST Data Management is building a \CPP \ \& Python software stack to analyze raw imaging data from LSST. I worked on the software stack to add functionality, documentation, \& tests. I developed \& implemented an algorithm to propagate covariance when stacking images and worked on techniques for optimal image stacking \& differential chromatic refraction. I worked on a machine-learning based star-galaxy classifier and on converting the LSST stack to use {\sffamily py.test}. }}
}
\headedsection % sets the header for a subsection and contains usually body text
{\href{http://www.physics.upenn.edu/}{Department of Physics \& Astronomy (University of Pennsylvania)}}
{\textsc{Philadelphia, PA}} {%
\headedsubsection
{Postdoctoral Researcher}
{Sep \apo15 -- Feb \apo17}
{\bodytext{I developed and implemented a parallelized Bayesian algorithm to estimate orbital parameters from stochastic light curves of binary supermassive black holes. I also developed and implemented a Python framework to automatically wrangle astronomical time-series data from a variety of sources, including web servers, SQL servers, data servers, and local data files.}}
\headedsubsection
{Principal Developer}
{Sept \apo15 -- Feb \apo17}
{\bodytext{I architected and implemented \href{https://github.com/AstroVPK/kali}{\textsc{k\={a}l\={i}}}, an open-source high performance library to model stochastic time-series data in a Bayesian framework. \href{https://github.com/AstroVPK/kali}{\textsc{k\={a}l\={i}}} is capable of modeling time-series data as variants of C-ARMA processes (a type of Gaussian random process). Written primarily in \CPP and exposed to Python using Cython, \href{https://github.com/AstroVPK/kali}{\textsc{k\={a}l\={i}}} uses {\sffamily scikit-learn} for machine learning, Intel MKL for fast linear algebra, Intel Bull Mountain technology for hardware random number generation, \& OpenMP 4.0 for vectorization \& parallelization. \href{https://github.com/AstroVPK/kali}{\textsc{k\={a}l\={i}}} is being used to study astronomical time-series data by multiple research groups at Caltech, UPenn, \& Drexel.}}
}
\headedsection
{\href{http://drexel.edu/coas/academics/departments-centers/physics/}{Department of Physics (Drexel University)}}
{\textsc{Philadelphia, PA}} {%
\headedsubsection
{AGN Variability Analysis}
{June \apo09 -- Aug \apo15}
{\bodytext{I developed \CPP \ software for Intel Xeon Phi accelerator cards to model AGN variability. I developed a vectorized \& parallelized \CPP \ pipeline to forward-model and fit the data using MLE of $2^{\textrm{nd}}$-order statistics.}}
\headedsubsection
{LSST Photo-z Analysis}
{Sept \apo08 -- May \apo09}
{\bodytext{I used MLE \& machine learning (neural networks) to establish the optimal y-band filter for LSST galaxy photo-z distance estimation.}}
}
\headedsection
{\href{https://physics.vcu.edu/}{Department of Physics (Virginia Commonwealth University)}}
{\textsc{Richmond, VA}} {%
\headedsubsection
{Adjunct Instructor}
{Jun \apo07 -- Aug \apo08}
{\bodytext{I taught the {\it Introduction to Astronomy} course.}}
\headedsubsection
{AFM Image Analysis}
{Aug \apo05 -- May \apo07}
{\bodytext{I implemented an {\large \sc idl} pipeline to analyze AFM images of silicon surfaces etched using oxygen.}}
}
\headedsection % sets the header for a subsection and contains usually body text
{\href{http://physics.richmond.edu/}{Department of Physics (University of Richmond)}}
{\textsc{Richmond, VA}} {%
\headedsubsection
{Cosmic Microwave Background Analysis}
{May \apo03 -- May \apo05}
{\bodytext{I used {\large \sc idl} to perform statistical tests of the utility of the bispectrum for detection of non-Gaussianity in the CMB.}}
}
\spacedhrule{2.0em}{0.2em}
\roottitle{Education}
\headedsection
{\href{http://drexel.edu/}{Drexel University}}
{\textsc{Philadelphia, PA}} {%
\headedsubsection
{Ph.D. in Physics}
{2008 -- 2015}
{\bodytext{Probing AGN Accretion Physics through AGN Variability: Insights from {\it Kepler}}}
}
\headedsection
{\href{http://www.vcu.edu/}{Virginia Commonwealth University}}
{\textsc{Richmond, VA}} {%
\headedsubsection
{M.S. in Physics \& Applied Physics}
{2005 -- 2007}
{\bodytext{CAFM Studies of Epitaxial Lateral Overgrowth GaN Films}}
}
\headedsection
{\href{http://www.richmond.edu/}{University of Richmond}}
{\textsc{Richmond, VA}} {%
\headedsubsection
{B.S. in Mathematics \& Physics}
{2001 -- 2005}
{\bodytext{The Bispectrum as a Quantifier of non-Gaussianity in the Cosmic Microwave Background}}
}
\spacedhrule{2.0em}{0.2em}
\roottitle{Certifications}
\headedsection
{\href{https://www.deeplearning.ai/}{deeplearning.ai}}
{\href{https://www.coursera.org/}{\textsc{coursera.org}}} {
\headedsubsection
{\href{https://www.coursera.org/account/accomplishments/verify/TWG6QF3MVDHG}{Custom Models, Layers, and Loss Functions with TensorFlow}}
{}
{\bodytext{Certificate earned on December 17, 2020.}}
\headedsubsection
{\href{https://www.coursera.org/account/accomplishments/specialization/9WCEB5PYL6VT}{Deep Learning Specialization}}
{}
{\bodytext{Certificate earned on June 24, 2018.}}
\headedsubsection
{\href{https://www.coursera.org/account/accomplishments/verify/RE53JG7A9ZKV}{Sequence Models}}
{}
{\bodytext{Certificate earned on June 24, 2018.}}
\headedsubsection
{\href{https://www.coursera.org/account/accomplishments/verify/GNUR9KQXTNPS}{Convolutional Neural Networks}}
{}
{\bodytext{Certificate earned on January 22, 2018.}}
\headedsubsection
{\href{https://www.coursera.org/account/accomplishments/verify/UXZBYZQKAEU9}{Structuring Machine Learning Projects}}
{}
{\bodytext{Certificate earned on December 17, 2017.}}
\headedsubsection
{\href{https://www.coursera.org/account/accomplishments/verify/8FCH622WF97A}{Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization}}
{}
{\bodytext{Certificate earned on November 26, 2017.}}
\headedsubsection
{\href{https://www.coursera.org/account/accomplishments/verify/JSDREA8QF6XG}{Neural Networks and Deep Learning}}
{}
{\bodytext{Certificate earned on November 4, 2017.}}
}
\spacedhrule{2.0em}{0.2em}
\roottitle{Skills}
\inlineheadsection % special section that has an inline header with a 'hanging' paragraph
{Technical expertise:}
{My expertise lies in the research, design \& implementation of high-performance scientific/numerical software on exotic, highly-parallel, and distributed hardware. Current interests include hardware-software co-design, particularly in the area of AI accelerators. I earned my doctorate in Astrophysics by applying mathematical \& statistical analysis and machine learning to complex time-domain data. While I am familiar with developing Deep Learning applications using TensorFlow \& PyTorch, my specialty lies in knowledge of the innards of these frameworks \& their execution on various accelerators such as GPUs, DPUs, etc. Used to working with(in) teams, I am fond of using Agile methodologies (Scrum) and continuous integration (Jenkins) to manage \& deliver on-time \& bug-free software. I enjoy writing C-For-Metal, OpenCL, FlowGraph, C\nsp, \CPP\nsp, \& Python, and am learning Julia/\nsp CUDA/\nsp Go/\nsp Io/\nsp Prolog. I have excellent knowledge of parallelization technologies (OpenMP, MPI, \CPP11 threads, \& POSIX threads) as well as hardware architecture. Over the years, I have gained extensive experience programming on novel computing platforms such as Intel's Xe-HPC accelerators, Wave Computing's Dataflow Processing Units (DPUs), \& Intel's Xeon Phi (Knights Corner \& Knights Landing). I have extensive knowledge of \& experience with various programming toolchains, including the Intel, GNU, LLVM, PGI, \& AOCC toolchains, as well as Valgrind, gdb, make, SCons, etc. My preferred development platform is Linux ($\geq$ 15 years of development experience), although I have also developed on Windows (1 year of dev-ex) \& Mac OSX (7 years of dev-ex). I am very comfortable writing in \LaTeX \ ($\geq$ 15 years of experience) and have a good understanding of the UNIX programming environment and tools (memory management, process spawning, etc.).}
\vspace{1.0em}
\inlineheadsection
{Public speaking:}
{With years of experience delivering highly technical talks to both expert \& general audiences, I am comfortable with public speaking \& outreach.}
\vspace{1.0em}
\inlineheadsection
{Natural languages:}
{English \emph{(native language)} and Hindi \emph{(native language)}.}
\spacedhrule{2.0em}{0.2em}
\roottitle{Service}
\inlineheadsection
{\href{https://www.mdpi.com/journal/particles/submission_reviewers}{Particles}}{Member of the Reviewer Board of the journal \href{https://www.mdpi.com/journal/particles}{Particles} published by MDPI.}
\inlineheadsection
{\href{https://www.nsf.gov/}{The National Science Foundation}}{Served on a grant review panel for the Division of Astronomical Sciences.}
\inlineheadsection
{\href{http://iopscience.iop.org/journal/0004-637X}{The Astrophysical Journal}}{Peer reviewed publications.}
\inlineheadsection
{\href{https://iopscience.iop.org/journal/1538-3881}{The Astronomical Journal}}{Peer reviewed publications.}
\inlineheadsection
{\href{https://academic.oup.com/mnras}{Monthly Notices of the Royal Astronomical Society}}{Peer reviewed publications.}
\spacedhrule{2.0em}{0.2em}
%\pagebreak
\roottitle{Publications}
\inlineheadsection
{A Performance-Based Comparison of C/C++ Compilers}{\href{https://colfaxresearch.com/compiler-comparison/}{Colfax Research}, 2017}
\inlineheadsection
{Science-driven Optimization of the LSST Observing Strategy}{\href{https://arxiv.org/abs/1708.04058}{arXiv}, 2017}
\inlineheadsection
{Large Synoptic Survey Telescope Galaxies Science Roadmap}{\href{https://arxiv.org/abs/1708.01617}{arXiv}, 2017}
\inlineheadsection
{Extracting Information from AGN Variability}{\href{https://doi.org/10.1093/mnras/stx1420}{MNRAS, 470, 3, 3027-3048}, 2017}
\inlineheadsection
{The LSST Data Management System}{\href{http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:1512.07914}{Proceedings of ADASS XXV}, 2015}
\inlineheadsection
{Do the Kepler AGN light curves need reprocessing?}{\href{http://dx.doi.org/10.1093/mnras/stv1797}{MNRAS, 453, 2075}, 2015}
\inlineheadsection
{Are the variability properties of the Kepler AGN light curves consistent with a damped random walk?}{\href{http://dx.doi.org/10.1093/mnras/stv1230}{MNRAS, 451, 4328}, 2015}
\inlineheadsection
{Thirty Meter Telescope Detailed Science Case: 2015}{\href{http://arxiv.org/abs/1505.01195}{http://arxiv.org/abs/1505.01195}, 2015}
\inlineheadsection
{AFM and CAFM studies of ELO GaN films}{\href{http://dx.doi.org/10.1117/12.706773}{Proc. SPIE 6473, 647308}, 2007}
\inlineheadsection
{Local electronic and optical behaviors of a-plane GaN grown via epitaxial lateral overgrowth}{\href{http://dx.doi.org/10.1063/1.2429901}{Appl. Phys. Lett., 90, 011913}, 2007}
% \pagebreak
\spacedhrule{2.0em}{0.2em}
\roottitle{Grants}
\inlineheadsection
{Kepler Guest Observer Program}{Co-Investigator on Kepler Guest Observer Program accepted proposals K2 GO16088, K2 GO14088, K2 GO12013, K2 GO8052, \& K2 GO10052}
\inlineheadsection
{NASA Grant NNX14AL56G}{Helped write proposal for awarded NASA Grant NNX14AL56G. Grant was used to fund my Ph.D. research.}
\spacedhrule{2.0em}{0.2em}
\roottitle{Presentations}
\inlineheadsection
{\href{https://www.youtube.com/watch?v=lVLn9am93Qs}{Webinar on Intel \& A Day in the Life of a Deep Learning Software Engineer}}{HiCounselor, September 24th, 2019, Sunnyvale, CA}
\inlineheadsection
{Applications of High Performance Computing in Artificial Intelligence}{Rajasthan Student Startup Exposure Program 2018 (RSSEP2018), September 8th, 2018, San Jose, CA}
\inlineheadsection
{\href{https://software.intel.com/en-us/advisor}{Intel Advisor}}{Stanford University ME344: Introduction to High Performance Computing, July 20th, 2017, Stanford, CA}
\inlineheadsection
{Optical Variability Signatures from Massive Black Hole Binaries}{229$^{\mathrm{th}}$ Meeting of the American Astronomical Society, 2017, Grapevine, TX}
\inlineheadsection
{Extracting Information From AGN Variability: an LSST AGN Collaboration Proposal}{2017 LSST AGN Science Collaboration Roadmap Development Meeting, 2017, Grapevine, TX}
\inlineheadsection
{Extracting Information from AGN Variability}{2016 KARL LSST Workshop, November 2016, Louisville, KY.}
\inlineheadsection
{Surveying the Dynamic Sky with the LSST}{2016 KARL LSST Workshop, November 2016, Louisville, KY.}
\inlineheadsection
{AGN Variability: Insights from Kepler}{2016 Hotwiring the Transient Universe V Meeting, October 2016, Villanova, PA.}
\inlineheadsection
{Probing Accretion Processes through Variability}{2016 TMT Science Forum `International Partnership for Global Astronomy', May 2016, Kyoto, Japan.}
\inlineheadsection
{AGN Variability: Insights from Kepler}{Princeton HSC Science Discussion Series, March 2016, Princeton, NJ.}
\inlineheadsection
{AGN Variability on Short Timescales: What does Kepler tell us about AGN Variability?}{2015 TMT Science Forum `Maximizing Transformative Science with TMT', June 2015, Washington, DC.}
\inlineheadsection
{What can Kepler tell us about AGN variability?}{225th Meeting of the American Astronomical Society, January 2015, Seattle, WA.}
\inlineheadsection
{Do Kepler AGN Light Curves Exhibit a Damped Random Walk?}{224th Meeting of the American Astronomical Society, June 2014, Boston, MA.}
\inlineheadsection
{The Bispectrum of Galactic Dust: Implications for Microwave Background non-Gaussianity}{204th Meeting of the American Astronomical Society, May 2004, Denver, CO.}
\spacedhrule{2.0em}{0.2em}
\roottitle{Conferences}
\inlineheadsection
{HotChips 2020 on behalf of Intel Corporation}{August 16th - 18th, 2020 San Jose, CA}
\inlineheadsection
{The Next AI Platform 2020 on behalf of Intel Corporation}{June 25th, 2020 San Jose, CA}
\inlineheadsection
{The Next FPGA Platform 2020 on behalf of Intel Corporation}{Jan 22nd, 2020 San Jose, CA}
\inlineheadsection
{The Next AI Platform 2019 on behalf of Intel Corporation}{May 9th, 2019 San Jose, CA}
\inlineheadsection
{Supercomputing 17 (SC17) on behalf of Colfax International}{November 12th - 17th, 2017 Denver, CO}
\end{document}