Biresh Kumar Joardar

NSF Computing Innovation (Postdoctoral) Fellow
Department of Electrical and Computer Engineering, Duke University, Durham, NC

I am on the academic job market for a tenure-track faculty position in 2021-2022.

I am a postdoc (CI Fellow) working with my mentor Dr. Krishnendu Chakrabarty at Duke University. Before starting at Duke, I was a PhD student at Washington State University, where I was advised by Dr. Partha Pratim Pande and Dr. Janardhan Rao Doppa. I completed my undergraduate degree at the Department of Electronics and Telecommunication Engineering, Jadavpur University, India. Apart from research, I enjoy being outdoors, especially playing football (soccer) and badminton, and working out at the gym. I also spend a lot of time reading about a variety of topics (did you know liquid helium can climb walls?).
My Research
I am currently working on "Reliable Machine Learning using Unreliable Hardware". Emerging technologies such as 3D integration and Resistive Random Access Memory (ReRAM) promise significant speed-ups for machine learning (ML). However, because their fabrication processes are relatively immature, architectures built on these technologies are not reliable. This makes training and inference of ML algorithms challenging, as weights are misrepresented when they are stored on faulty hardware. Moreover, noise affects different deep learning algorithms differently: in Graph Neural Networks (GNNs), the recursive message-passing mechanism repeatedly accumulates noise, and in Recurrent Neural Networks (RNNs), noise accumulates across time steps due to the presence of loops. My research aims to develop novel and inexpensive techniques to solve these problems; a small illustrative example follows. I would love to hear from, learn from, and collaborate with other researchers working in these domains.
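
To make the noise-accumulation issue concrete, here is a minimal Python sketch (illustrative only; the Gaussian noise model and the sizes are assumptions made for this example, not taken from any of the papers below). The same noisy weights are reused at every recurrent or message-passing step, so the hidden state drifts further and further away from the ideal computation:

    import numpy as np

    rng = np.random.default_rng(0)
    W = 0.2 * rng.standard_normal((64, 64))                  # ideal recurrent / message-passing weights
    W_faulty = W * (1.0 + rng.normal(0.0, 0.05, W.shape))    # assumed multiplicative conductance variation

    h_ideal = h_faulty = rng.standard_normal(64)
    for step in range(1, 21):                                # 20 recurrent / message-passing steps
        h_ideal = np.tanh(W @ h_ideal)                       # fault-free update
        h_faulty = np.tanh(W_faulty @ h_faulty)              # update with noisy weights
        drift = np.linalg.norm(h_ideal - h_faulty) / np.linalg.norm(h_ideal)
        print(f"step {step:2d}: relative drift = {drift:.3f}")

The drift grows with the number of steps even though the per-weight perturbation is small, which illustrates why the recursive structure of GNNs and RNNs amplifies small hardware-level perturbations.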


Professional experiences


  1. NSF Computing Innovation (Postdoctoral) Fellow, September 2020 - Present
    Mentor: Dr. Krishnendu Chakrabarty
    Duke University
  2. Cadence Design Systems: May 2019 - August 2019
    Project: Using Machine Learning to accelerate/improve the RTL synthesis process
  3. IIT Kharagpur: May 2015 - August 2015
    Mentor: Dr. Saswat Chakrabarti
    Project: Studying apneic conditions using Photo-Plethysmography (PPG)

My research interests include:
•   Reliable and fault-tolerant architectures
•   Heterogeneous manycore systems
•   Machine learning (including deep learning)
•   High-performance NoC design
•   Non-volatile memories
•   Emerging 3D technologies

You can find my CV here

Education

Washington State University

PhD
Major: Computer Engineering
Minor: Computer Science
Relevant Coursework:
Machine Learning, Structured Prediction, VLSI Design, Computer Architecture
GPA: 4.0 (max: 4.0)

Dissertation: Machine Learning-enabled vertically integrated heterogeneous manycore systems for big-data analytics

August 2016 - August 2020

Jadavpur University

Bachelor of Engineering
Electronics and Telecommunication Engineering

GPA: 9.1 (max: 10.0)

August 2012 - May 2016

Awards and Achievements

  1. Selected for the DAAD AInet Fellowship, awarded by DAAD, 2021
  2. Received the Computing Innovation (CI) Fellowship, awarded by the CRA and CCC, 2020
  3. Nominated for the Best Paper Award at DATE 2020 and DATE 2021
  4. Won the Best Paper Award, NOCS 2019
  5. Outstanding Graduate Student Researcher award, Voiland College of Engineering and Architecture, 2019
  6. Harold and Dianna Frank Electrical Engineering Fellowship, Washington State University, 2018
  7. Won 2nd place in the programming contest 'Algomaniac', Jadavpur University, India
  8. Selected for the KVPY Scholarship (National Rank: 670), India
  9. CPO’s Best Student Award, South Eastern Railway zone, India

Research

My research spans three main thrusts, summarized below.

The availability of different core architectures (CPUs, GPUs, NVMs, FPGAs, etc.) and interconnection technologies (e.g., TSV-based stacking, M3D, photonics, wireless) has revolutionized high-performance hardware design. However, the resulting diversity of hardware choices has made the design, evaluation, and testing of new architectures increasingly challenging. Each computation and communication element has its own set of requirements that must be satisfied simultaneously to obtain overall power, performance, and area benefits. Existing heuristic-based solutions do not scale and often lead to sub-optimal outcomes. Machine learning can address this problem: by learning a model of the design space, ML-guided search reaches better solutions much faster than traditional methods, reducing design time and yielding better architectures. A toy sketch of this idea follows the publication list below.
Relevant publications:

  1. B. K. Joardar, R. G. Kim, J. R. Doppa, P. P. Pande, D. Marculescu and R. Marculescu, “Learning-based Application-Agnostic 3D NoC Design for Heterogeneous Manycore Systems," in IEEE Transactions on Computers, vol. 68, no. 6, pp. 852-866, 2019
  2. A. Deshwal, N. K. Jayakodi, B. K. Joardar, J. R. Doppa, and P. P. Pande, "MOOS: A Multi-Objective Design Space Exploration and Optimization Framework for NoC Enabled Manycore Systems," in ACM Transactions on Embedded Computing Systems, 18, 5s, Article 77, 2019
  3. A. I. Arka, B. K. Joardar, R. G. Kim, D. H. Kim, J. R. Doppa and P. P. Pande, "HeM3D: Heterogeneous Manycore Architecture Based on Monolithic 3D Vertical Integration," in ACM Transactions on Design Automation of Electronic Systems, 26, 2, Article 16, 2021
  4. B. K. Joardar, A. Deshwal, J. R. Doppa, P. P. Pande and K. Chakrabarty, "High-Throughput Training of Deep CNNs on ReRAM-based Heterogeneous Architectures via Optimized Normalization Layers," in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2021
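
As a toy illustration of this thrust, the sketch below performs a simple multi-objective design-space exploration and extracts the Pareto front between two competing objectives. The design encoding (a binary link-placement vector), the latency/power cost models, and the use of random sampling in place of the learning-guided search in frameworks such as MOOS are all simplifying assumptions made for this example:

    import numpy as np

    rng = np.random.default_rng(1)

    def evaluate(design):
        """Placeholder cost models: (latency, power) for a binary link-placement vector."""
        latency = 1.0 / (1e-3 + design.mean())   # more links -> lower latency
        power = 0.2 * design.sum()               # more links -> higher power
        return (latency, power)

    def pareto_front(points):
        """Keep the points not dominated in both objectives (lower is better)."""
        return [p for p in points
                if not any(all(q[k] <= p[k] for k in range(2)) and q != p for q in points)]

    # Random sampling stands in for the ML-guided sampler used in practice.
    candidates = [rng.integers(0, 2, size=32).astype(float) for _ in range(200)]
    objectives = [evaluate(c) for c in candidates]
    print(f"Pareto-optimal designs found: {len(pareto_front(objectives))} out of {len(objectives)}")

In the real problem the objectives come from expensive architectural simulation, which is exactly where a learned model of the design space pays off.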

ML has become ubiquitous in real life, with applications in healthcare, recommendation systems, self-driving cars, etc. However, ML algorithms (particularly deep learning) are computationally demanding from the hardware perspective. General-purpose cores such as CPUs and GPUs are not optimized for these applications, leading to sub-optimal performance. Conventional ReRAM-based processing-in-memory (PIM) architectures are promising but have several shortcomings, such as the lack of normalization and high-precision support. We can address these challenges using heterogeneous architectures such as AccuReD and ReGraphX (cited below); a simplified sketch of the underlying layer-partitioning idea follows the publication list.
Relevant publications:

  1. B. K. Joardar, J. R. Doppa, P. P. Pande, H. Li and K. Chakrabarty, "AccuReD: High Accuracy Training of CNNs on ReRAM/GPU Heterogeneous 3D Architecture," in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2020
  2. A. I. Arka, B. K. Joardar, J. R. Doppa, P. P. Pande and K. Chakrabarty, “ReGraphX: NoC-enabled 3D Heterogeneous ReRAM Architecture for Training Graph Neural Networks,” in Design, Automation & Test in Europe Conference & Exhibition (DATE), Grenoble, France, 2021 (Best Paper Nomination).
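
The core idea in both AccuReD and ReGraphX is a division of labor between compute substrates. The sketch below is a deliberately simplified illustration of that partitioning (the layer names and the one-line rule are made up for this example; the published mapping also accounts for on-chip communication and precision requirements): MAC-dominated layers are assigned to ReRAM crossbars, while everything else stays on GPU cores:

    # Simplified layer-to-device partitioning rule (illustrative assumption only):
    # MAC-dominated layers go to ReRAM crossbars; everything else (normalization,
    # softmax, and other precision-sensitive layers) stays on the GPU.
    RERAM_FRIENDLY = {"conv", "linear"}

    def partition(layers):
        """Return a {layer_name: device} mapping using the simple rule above."""
        return {name: ("ReRAM" if kind in RERAM_FRIENDLY else "GPU")
                for name, kind in layers}

    cnn = [("conv1", "conv"), ("bn1", "batchnorm"), ("conv2", "conv"),
           ("fc", "linear"), ("softmax", "softmax")]
    for layer, device in partition(cnn).items():
        print(f"{layer:8s} -> {device}")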

New and emerging technologies such as processing-in-memory, 3D integration, and ReRAM promise significant power-performance benefits for ML applications. However, owing to immature fabrication processes, they are prone to failures and other non-ideal effects, which makes architectures based on them highly unreliable despite their advantages. Hence, it is important to develop error-tolerant ML applications that deliver the same prediction accuracy even when the underlying hardware is faulty or unreliable. This will not only promote the widespread adoption of these new and emerging technologies but also lead to faster ML applications. A toy fault-injection sketch follows the publication list below.
Relevant publications:

  1. B. K. Joardar, J. R. Doppa, P. P. Pande, H. Li and K. Chakrabarty, "AccuReD: High Accuracy Training of CNNs on ReRAM/GPU Heterogeneous 3D Architecture," in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2020
  2. B. K. Joardar, J. R. Doppa, H. Li, K. Chakrabarty and P. P. Pande, “Learning to Train CNNs on Faulty ReRAM-based Manycore Accelerators,” in International Conference on Compilers, Architectures, and Synthesis for Embedded Systems (CASES), 2021.
  3. X. Yang, S. Belakaria, B. K. Joardar, H. Yang, J. R. Doppa, P. P. Pande, K. Chakrabarty and H. Li, “Multi-Objective Optimization of ReRAM Crossbars for Robust DNN Inferencing under Stochastic Noise,” in International Conference on Computer Aided Design (ICCAD), 2021.
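
As a toy illustration of the fault-tolerance idea, the sketch below trains a tiny logistic-regression model on synthetic data while randomly forcing a fraction of the weights to zero at every update, mimicking stuck-at-zero ReRAM cells. The fault model, fault rates, and training recipe are assumptions made for this example, not the methods from the papers above; the point is only to show how fault injection can be folded into the training loop so the learned model is exposed to the same kind of weight corruption it will see at inference time:

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.standard_normal((512, 16))
    y = (X @ rng.standard_normal(16) > 0).astype(float)      # synthetic binary labels

    def train(fault_rate, steps=500, lr=0.1):
        """Logistic regression trained with random stuck-at-zero faults injected into the weights."""
        w = np.zeros(16)
        for _ in range(steps):
            mask = (rng.random(16) >= fault_rate)            # 1 = healthy cell, 0 = stuck at zero
            p = 1.0 / (1.0 + np.exp(-X @ (w * mask)))        # forward pass on the faulty weights
            w -= lr * mask * (X.T @ (p - y)) / len(y)        # only healthy cells get updated
        return w

    def accuracy_under_faults(w, fault_rate):
        mask = (rng.random(16) >= fault_rate)                # faults also present at inference time
        return np.mean(((X @ (w * mask)) > 0) == y)

    for train_fault_rate in (0.0, 0.2):
        acc = accuracy_under_faults(train(train_fault_rate), fault_rate=0.2)
        print(f"trained with fault rate {train_fault_rate:.1f}: accuracy under 20% faults = {acc:.3f}")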

Teaching & Mentoring

Tutorials, Lectures and Special Sessions

  1. "Overcoming Moore's Law via Technology and Machine Learning-driven Manycore Systems," Embedded Systems Week (ESWEEK) 2021 (with Dr. Janardhan Rao Doppa)

Teaching

I am the sole instructor of the ECE 590-7061 course at Duke University in Fall 2021. This course shows how state-of-the-art machine learning techniques can be applied to hardware design problems and is aimed at graduate students and senior undergraduate students with an interest in hardware systems and machine learning. The adoption of machine learning by hardware design engineers is relatively new and has yet to reach the next generation of students. According to the LinkedIn 2020 Emerging Jobs report, hiring for machine learning roles has grown 74% annually over the past four years, and education and hardware design are among the top three sectors driving this growth (besides software/IT). Therefore, there is an immediate demand for engineers who can apply machine learning algorithms to hardware design problems, both in academia and in the chip design industry. This course aims to address these needs and introduces students to how machine learning can be applied in hardware research.

Mentoring

I have mentored a diverse group of students from different countries over the last two years.
  1. Eduardo Ortega, PhD candidate (and Sloan scholar) at Duke University, Started in Fall 2021
  2. Chung-Hsuan Tung, PhD candidate at Duke University, Started in Fall 2021
  3. Aqeeb Iqbal Arka, PhD candidate at Washington State University, Started in Fall 2018
  4. Chukwufumnanya Ogbogu, PhD candidate at Washington State University, Started in Spring 2021
  5. Xian Sun, MS from Duke University, Graduated in Spring 2021

Publications

My work has been published in prestigious journals including TC, TCAD, TODAES, and TECS, and in conferences including DATE, ICCAD, NOCS, and CODES/ISSS. A few more papers are currently under revision and hence not listed here. One of my papers received the Best Paper Award (NOCS 2019), and two others were nominated for the Best Paper Award (DATE 2020 and DATE 2021). My publications are listed below in reverse chronological order:

Journal Publications

  1. B. K. Joardar, J. R. Doppa, H. Li, K. Chakrabarty and P. P. Pande, “Learning to Train CNNs on Faulty ReRAM-based Manycore Accelerators,” in ACM Transactions on Embedded Computing Systems (TECS), 2021 (as part of ESWEEK 2021).
  2. B. K. Joardar, A. Deshwal, J. R. Doppa, P. P. Pande and K. Chakrabarty, "High-Throughput Training of Deep CNNs on ReRAM-based Heterogeneous Architectures via Optimized Normalization Layers," in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2021
  3. A. I. Arka, B. K. Joardar, J. R. Doppa, P. P. Pande and K. Chakrabarty, "Performance and Accuracy Trade-offs for Training Graph Neural Networks on ReRAM-based Architectures," in IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2021
  4. A. I. Arka, B. K. Joardar, R. G. Kim, D. H. Kim, J. R. Doppa and P. P. Pande, "HeM3D: Heterogeneous Manycore Architecture Based on Monolithic 3D Vertical Integration," in ACM Transactions on Design Automation of Electronic Systems, 26, 2, Article 16, 2021
  5. B. K. Joardar, J. R. Doppa, P. P. Pande, H. Li and K. Chakrabarty, "AccuReD: High Accuracy Training of CNNs on ReRAM/GPU Heterogeneous 3D Architecture," in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 40, no. 5, pp. 971-984, 2021
  6. A. Deshwal, N. K. Jayakodi, B. K. Joardar, J. R. Doppa, and P. P. Pande, "MOOS: A Multi-Objective Design Space Exploration and Optimization Framework for NoC Enabled Manycore Systems," in ACM Transactions on Embedded Computing Systems, 18, 5s, Article 77, 2019
  7. B. K. Joardar, R. G. Kim, J. R. Doppa, P. P. Pande, D. Marculescu and R. Marculescu, “Learning-based Application-Agnostic 3D NoC Design for Heterogeneous Manycore Systems," in IEEE Transactions on Computers, vol. 68, no. 6, pp. 852-866, 2019

Conference Publications

  1. B. K. Joardar, J. R. Doppa, P. P. Pande, H. Li and K. Chakrabarty, “Processing-in-Memory enabled Heterogeneous Manycore Architectures for Deep Learning: From CNNs to GNNs,” in International Conference on Computer Aided Design (ICCAD), 2021.
  2. A. I. Arka, B. K. Joardar, J. R. Doppa, P. P. Pande and K. Chakrabarty, “DARe: DropLayer-Aware Manycore ReRAM Architecture for Training Graph Neural Networks,” in International Conference on Computer Aided Design (ICCAD), 2021.
  3. X. Yang, S. Belakaria, B. K. Joardar, H. Yang, J. R. Doppa, P. P. Pande, K. Chakrabarty and H. Li, “Multi-Objective Optimization of ReRAM Crossbars for Robust DNN Inferencing under Stochastic Noise,” in International Conference on Computer Aided Design (ICCAD), 2021.
  4. B. K. Joardar, A. I. Arka, J. R. Doppa and P. P. Pande, “3D++: Unlocking the Next Generation of High-Performance and Energy-Efficient Architectures using M3D Integration,” in Design, Automation & Test in Europe Conference & Exhibition (DATE), Grenoble, France, 2021.
  5. A. I. Arka, B. K. Joardar, J. R. Doppa, P. P. Pande and K. Chakrabarty, “ReGraphX: NoC-enabled 3D Heterogeneous ReRAM Architecture for Training Graph Neural Networks,” in Design, Automation & Test in Europe Conference & Exhibition (DATE), Grenoble, France, 2021 (Best Paper Nomination).
  6. B. K. Joardar, N. K. Jayakodi, J. R. Doppa, H. Li, P. P. Pande and K. Chakrabarty, “GRAMARCH: A GPU-ReRAM based Heterogeneous Architecture for Neural Image Segmentation,” in Proceedings of 23rd IEEE/ACM Design, Automation & Test in Europe Conference & Exhibition (DATE), Grenoble, France, 2020 (Best Paper Nomination).
  7. B. K. Joardar, P. Ghosh, P. P. Pande, A. Kalyanaraman and S. Krishnamoorthy, “NoC-enabled Software/Hardware Co-Design Framework for Accelerating k-mer Counting,” International Symposium on Networks-on-Chip (NOCS '19), New York, NY, USA, 2019 (Best Paper Award).
  8. P. Bogdan, F. Chen, A. Deshwal, J. R. Doppa, B. K. Joardar, H. Li, S. Nazarian, L. Song, and Y. Xiao, “Taming extreme heterogeneity via machine learning based design of autonomous manycore systems,” in Proceedings of the International Conference on Hardware/Software Codesign and System Synthesis Companion (CODES/ISSS), 2019
  9. B. K. Joardar, A. Deshwal, J. R. Doppa and P. P. Pande, "A Machine Learning Framework for Multi-Objective Design Space Exploration and Optimization of Manycore Systems," 2019 ACM/IEEE 1st Workshop on Machine Learning for CAD (MLCAD), Canmore, AB, Canada, 2019, pp. 1-6
  10. B. K. Joardar, R. G. Kim, J. R. Doppa, P. P. Pande, “Design and Optimization of Heterogeneous Manycore Systems enabled by Emerging Interconnect Technologies: Promises and Challenges,” Proceedings of 22nd IEEE/ACM Design, Automation & Test in Europe Conference & Exhibition (DATE), Florence, Italy, 2019
  11. B. K. Joardar, B. Li, J. R. Doppa, H. Li, P. P. Pande and K. Chakrabarty, “REGENT: A Heterogeneous ReRAM/GPU-based Architecture Enabled by NoC for Training CNNs,” Proceedings of 22nd IEEE/ACM Design, Automation & Test in Europe Conference & Exhibition (DATE), Florence, Italy, 2019
  12. B. K. Joardar, J. R. Doppa, P. P. Pande, D. Marculescu and R. Marculescu, “Hybrid On-Chip Communication Architectures for Heterogeneous Manycore Systems,” Proceedings of 37th IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2018
  13. B. K. Joardar, K. Duraisamy and P. P. Pande, "High performance collective communication-aware 3D Network-on-Chip architectures," 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE), Dresden, Germany, 2018, pp. 1351-1356
  14. B. K. Joardar, W. Choi, R. G. Kim, J. R. Doppa, P. P. Pande, D. Marculescu and R. Marculescu, "3D NoC-Enabled Heterogeneous Manycore Architectures for Accelerating CNN Training: Performance and Thermal Trade-offs," In Proceedings of the Eleventh IEEE/ACM International Symposium on Networks-on-Chip (NOCS '17), New York, NY, USA, 2017, Article 18
  15. S. Das, S. Chatterjee, B. K. Joardar, A. Mukherjee, and M. K. Naskar, "Physical channel modeling by calcium signaling in molecular communication based nanonetwork," In Proceedings of the 10th EAI International Conference on Body Area Networks (BodyNets ’15), Brussels, Belgium, pp. 71–77, 2015

A complete list of my publications can also be found on my Google Scholar page.

Additional

Presentations/Posters and Invited Talks

  1. University of California, Riverside, 2021, Host: Dr. Philip Brisk
  2. SRC annual review event, 2021 (Both poster and presentation)
  3. DATE 2020, Online event
  4. Cadence Design Systems, 2019, San Jose (Poster)
  5. DATE 2019, Florence, Italy
  6. NOCS 2019, New York, US
  7. DATE 2018, Dresden, Germany

Program Committee Member

  1. International Green and Sustainable Computing Conference (IGSC), 2021
  2. ACM/IEEE Workshop on Machine Learning for CAD (MLCAD), 2021

Reviewer

  1. IEEE Transactions on Computers (TC)
  2. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD)
  3. IEEE Transactions on Very Large Scale Integration (VLSI) Systems (TVLSI)
  4. IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB)
  5. IEEE Design & Test (D&T)
  6. IEEE Journal on Emerging and Selected Topics in Circuits and Systems (JETCAS)
  7. IEEE Transactions on Automation Science and Engineering (TASE)
  8. IEEE Transactions on Circuits and Systems II: Express Briefs (TCAS-II)
  9. ACM Journal on Emerging Technologies in Computing Systems (JETC)
  10. ACM Transactions on Architecture and Code Optimization (TACO)

In the news

  1. Machine Learning for Machine Learning in CIFellows Spotlight, May 3, 2021
  2. Researchers zero in on zeroes problem in WSU Insider, February 8, 2021
  3. Memo from the Dean at Duke University, September 24, 2020
  4. Graduate student receives Computing Innovation Fellowship in WSU Insider, July 28, 2020
  5. Speeding up machine learning in WSU Insider, March 16, 2020

Contact Information

Please feel free to contact me via email: bireshkumar (dot) joardar (at) duke (dot) edu