Biresh Kumar Joardar

Department of Electrical and Computer Engineering, Duke University, Durham, NC

I am a Postdoc (CI-Fellow) working with my mentor Dr. Krishnendu Chakrabarty at Duke University. Before starting at Duke, I was a PhD candidate at Washington State University, where I was advised by Dr. Partha Pratim Pande and Dr. Janardhan Rao Doppa. I completed my undergraduate degree at the Department of Electronics and Telecommunications Engineering, Jadavpur University, India. Apart from research, I enjoy being outdoors, especially playing football (soccer) and badminton, going on long drives, and working out at the gym. I also spend a lot of time reading about a variety of topics (did you know liquid helium can climb walls?).

I'm currently working on "Machine Learning for Machine Learning" (ML for ML). The aim of this project is to enable a virtuous cycle of machine learning-based hardware design that further empowers advances in machine learning. The project has two parts: (a) ML for hardware design: ML can be used to improve the hardware design process at several stages, from netlist optimization to late-stage testing and verification; and (b) hardware design for ML applications: as ML algorithms become increasingly complex, new architectures are needed to enable them. These two aspects are two sides of the same coin and need to be investigated in parallel. Over the next few years, I will explore how ML and hardware design can benefit each other. I would love to hear from, learn from, and collaborate with other researchers working in these domains.

My research interests include:
•   Reliable and fault-tolerant architectures
•   Heterogeneous manycore systems
•   Machine learning (including deep learning)
•   High-performance NoC design
•   Non-volatile memories
•   Emerging 3D technologies


You can find my CV here

Education

Duke University

Postdoc (CI-Fellow)
Department of Electrical and Computer Engineering
September 2020 - present

Washington State University

PhD candidate
Major: Computer Engineering
Minor: Computer Science
Relevant Coursework:
Machine Learning, Structured Prediction, VLSI Design, Computer Architecture
GPA: 4.0 (max: 4.0)

Dissertation: Machine Learning-Enabled Vertically Integrated Heterogeneous Manycore Systems for Big-Data Analytics

August 2016 - August 2020

Jadavpur University

Bachelor of Engineering
Electronics and Telecommunication Engineering

GPA: 9.1 (max: 10.0)

August 2012 - May 2016

S.E. Rly. Mixed H.S. (E.M.) School

Marks Percentage (Class 10): 94 (Max: 100)

Marks Percentage (Class 12): 92 (Max: 100)

April 2000 - April 2012

Research


The availability of different core architectures (CPUs, GPUs, NVMs, FPGAs, etc.) and interconnection technologies (e.g., TSV-based stacking, M3D, photonics, wireless) has revolutionized high-performance hardware design. However, the resulting diversity in the choice of hardware has made the design, evaluation, and testing of new architectures an increasingly challenging problem. Each computation/communication element has its own set of requirements that need to be satisfied simultaneously for overall power, performance, and area benefits. Existing heuristic-based solutions are not scalable and often lead to sub-optimal outcomes. ML techniques can be used to solve this problem: by learning the design space of possible solutions, ML can reach better results much faster than traditional methods. This reduces design time and leads to better architectures in the future.
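As a toy illustration of this idea (a minimal sketch, not the method used in the publications below; the design knobs, the evaluate_design cost function, and the sample sizes are all hypothetical), a cheap surrogate model can be trained on a handful of expensive simulations and then used to screen thousands of candidate designs:

```python
# Illustrative surrogate-assisted design-space exploration (hypothetical setup).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def evaluate_design(x):
    # Stand-in for an expensive architectural simulation that returns a
    # combined power/performance cost for a candidate design vector x.
    return float(np.sum((x - 0.3) ** 2) + 0.05 * rng.normal())

# 1. Evaluate a small set of random designs with the expensive simulator.
train_X = rng.random((50, 8))            # 8 hypothetical design knobs in [0, 1]
train_y = np.array([evaluate_design(x) for x in train_X])

# 2. Fit a cheap surrogate that learns the shape of the design space.
surrogate = RandomForestRegressor(n_estimators=100).fit(train_X, train_y)

# 3. Screen many more candidates with the surrogate; simulate only the best one.
candidates = rng.random((10000, 8))
best = candidates[np.argmin(surrogate.predict(candidates))]
print("Predicted-best design:", best, "actual cost:", evaluate_design(best))
```

In practice, such a screen-then-simulate loop is repeated, with newly simulated designs added back to the training set so the surrogate improves where it matters most.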
Relevant publications:

  1. B. K. Joardar, R. G. Kim, J. R. Doppa, P. P. Pande, D. Marculescu and R. Marculescu, “Learning-based Application-Agnostic 3D NoC Design for Heterogeneous Manycore Systems," in IEEE Transactions on Computers, vol. 68, no. 6, pp. 852-866, 2019
  2. A. Deshwal, N. K. Jayakodi, B. K. Joardar, J. R. Doppa, and P. P. Pande, “MOOS: A Multi-Objective Design Space Exploration and Optimization Framework for NoC Enabled Manycore Systems,” in ACM Transactions on Embedded Computing Systems, vol. 18, no. 5s, Article 77, 2019
  3. A. I. Arka, B. K. Joardar, R. G. Kim, D. H. Kim, J. R. Doppa and P. P. Pande, "HeM3D: Heterogeneous Manycore Architecture Based on Monolithic 3D Vertical Integration," in ACM Transactions on Design Automation of Electronic Systems, 2020

ML has become ubiquitous in real life, with applications in healthcare, recommendation systems, self-driving cars, etc. However, ML algorithms (particularly deep learning) are computationally demanding from a hardware perspective. General-purpose cores such as CPUs and GPUs are not optimized for these applications, leading to sub-optimal performance. The most notable limitations of existing architectures are: (a) high area requirements, (b) relatively low performance per watt, and (c) limited memory bandwidth. Hence, new architectures with domain-specific customizations are necessary for accelerating ML applications.
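To see why memory bandwidth in particular becomes a bottleneck, the back-of-the-envelope estimate below (with example layer sizes chosen purely for illustration) computes the arithmetic intensity of a single convolution layer; when the FLOP-per-byte ratio is low relative to what the hardware can sustain, the layer is memory-bound rather than compute-bound:

```python
# Rough arithmetic-intensity estimate for one conv layer (example sizes, FP32 data).
H = W = 56; C_in = 64; C_out = 64; K = 3          # feature-map and kernel dimensions
flops = 2 * H * W * C_out * C_in * K * K          # multiply-accumulates, counted as 2 ops each
bytes_moved = 4 * (H * W * C_in                   # input activations
                   + C_out * C_in * K * K         # weights
                   + H * W * C_out)               # output activations
print(f"FLOPs: {flops / 1e6:.1f} M, bytes: {bytes_moved / 1e6:.2f} MB, "
      f"arithmetic intensity: {flops / bytes_moved:.1f} FLOP/byte")
```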
Relevant publications:

  1. B. K. Joardar, J. R. Doppa, P. P. Pande, H. Li and K. Chakrabarty, "AccuReD: High Accuracy Training of CNNs on ReRAM/GPU Heterogeneous 3D Architecture," in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2020
  2. B. K. Joardar, W. Choi, R. G. Kim, J. R. Doppa, P. P. Pande, D. Marculescu and R. Marculescu, "3D NoC-Enabled Heterogeneous Manycore Architectures for Accelerating CNN Training: Performance and Thermal Trade-offs," In Proceedings of the Eleventh IEEE/ACM International Symposium on Networks-on-Chip (NOCS '17), New York, NY, USA, 2017, Article 18

New and emerging technologies like processing-in-memory, monolithic 3D integration, and non-volatile memory promise significant power-performance benefits for ML applications. However, due to immature fabrication processes, they are prone to failures and other non-ideal effects. This makes these architectures highly unreliable despite their advantages. Hence, it is important to develop error-tolerant ML applications that can deliver the same prediction accuracy even when the underlying hardware is faulty or unreliable. This will not only promote the widespread adoption of these new and emerging technologies but also lead to faster ML applications.
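The sketch below shows the kind of experiment this line of work starts from, assuming a simple stuck-at-zero fault model for unreliable memory cells (the network, fault rate, and fault model are illustrative only, not the setup used in the paper listed below):

```python
# Illustrative fault-injection experiment (hypothetical stuck-at-zero fault model).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

def inject_faults(model, fault_rate=0.01):
    # Randomly zero out a fraction of the weights, as if some memory cells were stuck at zero.
    with torch.no_grad():
        for p in model.parameters():
            mask = torch.rand_like(p) < fault_rate
            p[mask] = 0.0

x = torch.randn(32, 784)                 # a dummy batch of inputs
clean_pred = model(x).argmax(dim=1)
inject_faults(model, fault_rate=0.05)
faulty_pred = model(x).argmax(dim=1)

# How often does the predicted class survive the injected faults?
agreement = (clean_pred == faulty_pred).float().mean().item()
print(f"Prediction agreement after fault injection: {agreement:.2%}")
```

An error-tolerant training method would then aim to keep this agreement (and the actual test accuracy) high even as the fault rate grows.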
Relevant publications:

  1. B. K. Joardar, J. R. Doppa, P. P. Pande, H. Li and K. Chakrabarty, "AccuReD: High Accuracy Training of CNNs on ReRAM/GPU Heterogeneous 3D Architecture," in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2020

Publications

My work has been published in prestigious journals including TC, TCAD, TODAES, and TECS, and at conferences including DATE, ICCAD, NOCS, and CODES. A few more papers are currently under revision (and hence not listed here). One of my papers received the Best Paper Award (at NOCS 2019), and another was nominated for the Best Paper Award (at DATE 2020). My publications are listed below in reverse chronological order:

Journal Publications

  1. A. I. Arka, B. K. Joardar, R. G. Kim, D. H. Kim, J. R. Doppa and P. P. Pande, "HeM3D: Heterogeneous Manycore Architecture Based on Monolithic 3D Vertical Integration," in ACM Transactions on Design Automation of Electronic Systems, 2020
  2. B. K. Joardar, J. R. Doppa, P. P. Pande, H. Li and K. Chakrabarty, "AccuReD: High Accuracy Training of CNNs on ReRAM/GPU Heterogeneous 3D Architecture," in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2020
  3. A. Deshwal, N. K. Jayakodi, B. K. Joardar, J. R. Doppa, and P. P. Pande, “MOOS: A Multi-Objective Design Space Exploration and Optimization Framework for NoC Enabled Manycore Systems,” in ACM Transactions on Embedded Computing Systems, vol. 18, no. 5s, Article 77, 2019
  4. B. K. Joardar, R. G. Kim, J. R. Doppa, P. P. Pande, D. Marculescu and R. Marculescu, “Learning-based Application-Agnostic 3D NoC Design for Heterogeneous Manycore Systems," in IEEE Transactions on Computers, vol. 68, no. 6, pp. 852-866, 2019

Conference Publications

  1. A. I. Arka, B. K. Joardar, J. R. Doppa, P. P. Pande and K. Chakrabarty, “ReGraphX: NoC-enabled 3D Heterogeneous ReRAM Architecture for Training Graph Neural Networks,” to be presented at Design, Automation & Test in Europe Conference & Exhibition (DATE), Grenoble, France, 2021 (Best Paper Nomination).
  2. B. K. Joardar, N. K. Jayakodi, J. R. Doppa, H. Li, P. P. Pande and K. Chakrabarty, “GRAMARCH: A GPU-ReRAM based Heterogeneous Architecture for Neural Image Segmentation,” in Proceedings of 23rd IEEE/ACM Design, Automation & Test in Europe Conference & Exhibition (DATE), Grenoble, France, 2020 (Best Paper Nomination).
  3. B. K. Joardar, P. Ghosh, P. P. Pande, A. Kalyanaraman and S. Krishnamoorthy, “NoC-enabled Software/Hardware Co-Design Framework for Accelerating k-mer Counting,” International Symposium on Networks-on-Chip (NOCS '19), New York, NY, USA, 2019 (Best Paper Award).
  4. P. Bogdan, F. Chen, A. Deshwal, J. R. Doppa, B. K. Joardar, H. Li, S. Nazarian, L. Song, and Y. Xiao, “Taming extreme heterogeneity via machine learning based design of autonomous manycore systems,” in Proceedings of the International Conference on Hardware/Software Codesign and System Synthesis Companion (CODES/ISSS), 2019
  5. B. K. Joardar, A. Deshwal, J. R. Doppa and P. P. Pande, "A Machine Learning Framework for Multi-Objective Design Space Exploration and Optimization of Manycore Systems," 2019 ACM/IEEE 1st Workshop on Machine Learning for CAD (MLCAD), Canmore, AB, Canada, 2019, pp. 1-6
  6. B. K. Joardar, R. G. Kim, J. R. Doppa, P. P. Pande, “Design and Optimization of Heterogeneous Manycore Systems enabled by Emerging Interconnect Technologies: Promises and Challenges,” Proceedings of 22nd IEEE/ACM Design, Automation & Test in Europe Conference & Exhibition (DATE), Florence, Italy, 2019
  7. B. K. Joardar, B. Li, J. R. Doppa, H. Li, P. P. Pande and K. Chakrabarty, “REGENT: A Heterogeneous ReRAM/GPU-based Architecture Enabled by NoC for Training CNNs,” Proceedings of 22nd IEEE/ACM Design, Automation & Test in Europe Conference & Exhibition (DATE), Florence, Italy, 2019
  8. B. K. Joardar, J. R. Doppa, P. P. Pande, D. Marculescu and R. Marculescu, “Hybrid On-Chip Communication Architectures for Heterogeneous Manycore Systems,” Proceedings of 37th IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2018
  9. B. K. Joardar, K. Duraisamy and P. P. Pande, "High performance collective communication-aware 3D Network-on-Chip architectures," 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE), Dresden, Germany, 2018, pp. 1351-1356
  10. B. K. Joardar, W. Choi, R. G. Kim, J. R. Doppa, P. P. Pande, D. Marculescu and R. Marculescu, "3D NoC-Enabled Heterogeneous Manycore Architectures for Accelerating CNN Training: Performance and Thermal Trade-offs," In Proceedings of the Eleventh IEEE/ACM International Symposium on Networks-on-Chip (NOCS '17), New York, NY, USA, 2017, Article 18
  11. S. Das, S. Chatterjee, B. K. Joardar, A. Mukherjee, and M. K. Naskar, "Physical channel modeling by calcium signaling in molecular communication based nanonetwork," In Proceedings of the 10th EAI International Conference on Body Area Networks (BodyNets ’15), Brussels, Belgium, pp. 71–77, 2015

A complete list of my works can also be found on my Google Scholar page

Additional information

Internship experiences

  1. Cadence Design Systems (2019): Using Machine Learning to accelerate/improve the RTL synthesis process
  2. IIT Kharagpur (2015): Studying apneic conditions using Photo-Plethysmography (PPG)

In the news

  1. Memo from the Dean at Duke University, September 24, 2020
  2. Graduate student receives Computing Innovation Fellowship in WSU Insider, July 28, 2020
  3. Speeding up machine learning in WSU Insider, March 16, 2020

Awards and Achievements

  1. Featured on the WSU EECS webpage (for receiving the CI-Fellowship), August, 2020
  2. Received the CI-Fellowship, Awarded by CRA and CCC, 2020
  3. Nominated for the Best Paper Award, DATE 2020
  4. Won the Best Paper Award, NOCS 2019
  5. Outstanding Graduate Student Researcher award, Voiland College of Engineering and Architecture, 2019
  6. Harold and Dianna Frank Electrical Engineering Fellowship, Washington State University, 2018
  7. Won 2nd position in programming contest 'Algomaniac', Jadavpur University, India
  8. Selected for KVPY Scholarship (National Rank: 670), India
  9. CPO’s Best Student Award, South Eastern Railway zone, India

Programming skills

  1. C/C++: Implemented numerous code snippets related to my research and worked with existing simulation environments
  2. Python: Implemented numerous code snippets related to my research including Deep Learning applications
  3. Java: Basic working knowledge; I have worked with the popular machine learning toolkit Weka

Contact Information

Please feel free to contact me via email: biresh (dot) joardar (at) wsu (dot) edu