
AutoDMP: Automated DREAMPlace-based Macro Placement

Built on the GPU-accelerated global placer DREAMPlace and the detailed placer ABCDPlace, AutoDMP adds enhancements for simultaneous macro and standard cell placement, together with automatic parameter tuning based on multi-objective hyperparameter Bayesian optimization (MOBO).

  • Simultaneous Macro and Standard Cell Placement Animations: MemPool Group, Ariane

Publications

  • Anthony Agnesina, Puranjay Rajvanshi, Tian Yang, Geraldo Pradipta, Austin Jiao, Ben Keller, Brucek Khailany, and Haoxing Ren, "AutoDMP: Automated DREAMPlace-based Macro Placement", International Symposium on Physical Design (ISPD), Virtual Event, Mar 26-29, 2023 (preprint) (blog)

Dependency

  • DREAMPlace

    • Commit b8f87eec1f4ddab3ad50bbd43cc5f4ccb0072892
    • Other versions may also work, but they have not been tested
  • GPU compute capability 6.0 or later (optional)

    • The code has been tested on GPUs with compute capability 8.0 on a DGX A100 machine (a quick capability check is sketched below).
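
If you are unsure whether a GPU meets this requirement, here is a minimal sketch (assuming PyTorch, which DREAMPlace already depends on, is available in the environment) that reports the compute capability of each visible device.

    # Sketch only: list visible CUDA devices and their compute capability,
    # using PyTorch, which DREAMPlace already requires.
    import torch

    if torch.cuda.is_available():
        for i in range(torch.cuda.device_count()):
            major, minor = torch.cuda.get_device_capability(i)
            print(f"GPU {i}: {torch.cuda.get_device_name(i)}, compute capability {major}.{minor}")
    else:
        print("No CUDA device visible.")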

How to Build

You can build in two ways:

  • Build without Docker by following the DREAMPlace build instructions in README_DREAMPlace.md.
  • Use the provided Dockerfile to build an image with the required library dependencies.

How to Run Multi-Objective Bayesian Optimization

To run the multi-objective Bayesian optimization test on NVDLA NanGate45, call:

./tuner/run_tuner.sh 1 1 test/nvdla_nangate45_51/configspace.json test/nvdla_nangate45_51/NV_NVDLA_partition_c.aux test/nvdla_nangate45_51/nvdla_ppa.json \"\" 20 2 0 0 10 ./tuner test/nvdla_nangate45_51/mobohb_log

This will run on the GPUs for 20 iterations with 2 parallel workers. The different settings for the Bayesian optimization can be found in tuner/run_tuner.sh. The easiest way to explore different search spaces is to modify tuner/configspace.json. You can also run in single-objective mode or modify the parameters of the kernel density estimators in tuner/tuner_train.py.
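
For intuition about the multi-objective part, below is a small, self-contained Python sketch of Pareto-dominance selection over placement metrics. It is illustrative only, not the tuner's implementation, and the metric tuple (wirelength, congestion, density) is a hypothetical example.

    # Illustrative sketch (not AutoDMP's tuner code): Pareto-dominance filtering over
    # multiple objectives, all assumed to be minimized. A multi-objective optimizer
    # keeps the non-dominated trials and proposes configurations likely to improve them.

    def dominates(a, b):
        """Candidate a dominates b if it is no worse in every objective and better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(candidates):
        """Return the non-dominated subset of objective tuples."""
        return [c for c in candidates
                if not any(dominates(other, c) for other in candidates if other is not c)]

    # Hypothetical trial results as (wirelength, congestion, density) tuples.
    trials = [(1.00, 0.80, 0.55), (0.95, 0.85, 0.60), (1.05, 0.85, 0.60), (1.10, 0.70, 0.50)]
    print(pareto_front(trials))  # the third trial is dominated by the second and is dropped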

Physical Design Flow

The physical design flow requires RTL, Python, and Tcl files from the TILOS-MacroPlacement repository. Only the code that we have added or modified is provided in scripts.
