

GFN-PG

Code for the ICML 2024 paper 'GFlowNet Training by Policy Gradients'

Click here to download the sEH dataset.

The code is adapted from torchgfn but is not compatible with it. Please make sure torchgfn is not installed in your Python environment when running this code, to avoid importing the wrong modules.
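As a quick sanity check, here is a minimal sketch (not part of this repository, and assuming the conflicting package is distributed on PyPI under the name torchgfn) that fails fast if torchgfn is installed:

# Sketch of a pre-run check: raise before anything else is imported
# if the torchgfn distribution is present, since its modules could
# shadow the adapted code in this repository.
from importlib.metadata import PackageNotFoundError, version

try:
    found = version("torchgfn")  # assumes the PyPI distribution name "torchgfn"
except PackageNotFoundError:
    pass  # torchgfn is absent; safe to proceed
else:
    raise RuntimeError(
        f"torchgfn {found} is installed; run `pip uninstall torchgfn` "
        "before running this code to avoid unexpected imports."
    )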

Citation

If you find our code useful, please consider citing our paper in your publications. We provide a BibTeX entry below.

@InProceedings{pmlr-v235-niu24c,
  title     = {{GF}low{N}et Training by Policy Gradients},
  author    = {Niu, Puhua and Wu, Shili and Fan, Mingzhou and Qian, Xiaoning},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {38344--38380},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/niu24c/niu24c.pdf},
  url       = {https://proceedings.mlr.press/v235/niu24c.html},
  abstract  = {Generative Flow Networks (GFlowNets) have been shown effective to generate combinatorial objects with desired properties. We here propose a new GFlowNet training framework, with policy-dependent rewards, that bridges keeping flow balance of GFlowNets to optimizing the expected accumulated reward in traditional Reinforcement-Learning (RL). This enables the derivation of new policy-based GFlowNet training methods, in contrast to existing ones resembling value-based RL. It is known that the design of backward policies in GFlowNet training affects efficiency. We further develop a coupled training strategy that jointly solves GFlowNet forward policy training and backward policy design. Performance analysis is provided with a theoretical guarantee of our policy-based GFlowNet training. Experiments on both simulated and real-world datasets verify that our policy-based strategies provide advanced RL perspectives for robust gradient estimation to improve GFlowNet performance. Our code is available at: github.com/niupuhua1234/GFN-PG.}
}
