PPO attention net (GTrXLNet) #176

Draft: RemiG3 wants to merge 6 commits into master.
Conversation

@RemiG3 commented Apr 11, 2023

Description

Add PPO attention network (GTrXLNet, paper: Stabilizing Transformers for Reinforcement Learning).
Comparisons still need to be made (e.g. against the RLlib implementation).

closes #165

Note: I have cleaned up most of the code, but it's still under development.
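
A minimal usage sketch (the module path, class name, and policy string alias are the ones proposed in this branch and may still change while the code is under development):

import gym  # or gymnasium, depending on the SB3 version this branch targets
from stable_baselines3.common.vec_env import DummyVecEnv
from sb3_contrib.ppo_attention.ppo_attention import AttentionPPO

# Vectorized CartPole environment as a quick smoke test
env = DummyVecEnv([lambda: gym.make("CartPole-v1")])

# batch_size and n_steps are chosen so the rollout buffer splits evenly into mini-batches
# (a sanity check on these values was added in this PR, as in PPO)
model = AttentionPPO("MlpAttnPolicy", env, n_steps=240, batch_size=12, verbose=1)
model.learn(total_timesteps=10_000)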

Context

  • I have raised an issue to propose this change (required)

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (update in the documentation)

Checklist:

  • I've read the CONTRIBUTION guide (required)
  • The functionality/performance matches that of the source (required for new training algorithms or training-related features).
  • I have updated the tests accordingly (required for a bug fix or a new feature).
  • I have included an example of using the feature (required for new features).
  • I have included baseline results (required for new training algorithms or training-related features).
  • I have updated the documentation accordingly.
  • I have updated the changelog accordingly (required).
  • I have reformatted the code using make format (required)
  • I have checked the codestyle using make check-codestyle and make lint (required)
  • I have ensured make pytest and make type both pass (required).

Note: we are using a maximum length of 127 characters per line.

Member

a forgotten file?

Author

Yep, my bad.

RemiG3 and others added 5 commits on April 13, 2023 at 18:35:
Convert numpy array to torch array (for evaluation)
Remove model call when episode starts (memory dimension and features sequence not always the same)
Add assertion sanity check on batch_size and n_steps (as in PPO)
rrfaria commented Aug 8, 2023

Hey @RemiG3

I hope everything is going well. 👋 I've been following the development of the attention PPO feature, and I'm really excited about the progress being made!

Could you provide an update on the current status of this feature? I'd love to know where it stands and if there's anything new to be excited about since the last time you commented.

I came across this example you shared:

# Imports needed to actually run the snippet (gym/gymnasium and DummyVecEnv were omitted above)
import gym  # or gymnasium, depending on the SB3 version used by this branch
from stable_baselines3.common.vec_env import DummyVecEnv

from sb3_contrib.ppo_attention.ppo_attention import AttentionPPO
from sb3_contrib.ppo_attention.policies import MlpAttnPolicy  # kept from the original example; the string alias is passed below

VE = DummyVecEnv([lambda: gym.make("CartPole-v1")])

model = AttentionPPO(
    "MlpAttnPolicy",
    VE,
    n_steps=240,
    learning_rate=0.0003,
    verbose=1,
    batch_size=12,
    ent_coef=0.03,
    vf_coef=0.5,
    seed=1,
    n_epochs=10,
    max_grad_norm=1,
    gae_lambda=0.95,
    gamma=0.99,
    device='cpu',
    policy_kwargs=dict(
        net_arch=dict(pi=[64, 32], vf=[64, 32]),
    )
)

Does it still work like this?

If there's any example available to better understand how this feature is being implemented or if it's already possible to test a prototype, I'd be incredibly grateful for any information in this regard.

Thank you so much for the hard work you're putting into this.

Many thanks, and I'm eagerly looking forward to your response. 🚀

@LeZheng-x

In iGibson, I compared three algorithms: PPO, RecurrentPPO, and AttentionPPO. Unfortunately, even when I change the network parameters of GTrXL, it performs poorly and requires more training time.
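
For context, a minimal sketch of how such a side-by-side comparison can be set up with a shared configuration (CartPole-v1 stands in for the iGibson task; AttentionPPO and its import path are the ones proposed in this PR, the rest is standard stable-baselines3 / sb3_contrib API):

from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy
from sb3_contrib import RecurrentPPO
from sb3_contrib.ppo_attention.ppo_attention import AttentionPPO  # from this PR branch

candidates = {
    "PPO": (PPO, "MlpPolicy"),
    "RecurrentPPO": (RecurrentPPO, "MlpLstmPolicy"),
    "AttentionPPO": (AttentionPPO, "MlpAttnPolicy"),
}

for name, (algo, policy) in candidates.items():
    # Same rollout and seed settings for every algorithm so the runs are comparable
    env = make_vec_env("CartPole-v1", n_envs=4, seed=0)
    model = algo(policy, env, n_steps=240, batch_size=12, seed=1, verbose=0)
    model.learn(total_timesteps=50_000)
    eval_env = make_vec_env("CartPole-v1", n_envs=4, seed=0)
    mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=20)
    print(f"{name}: {mean_reward:.1f} +/- {std_reward:.1f}")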

Successfully merging this pull request may close these issues.

[Feature Request] Add Attention nets (GTrXL model in particular)