[Feature] AbsorbingStateTransform #2290

Draft · wants to merge 1 commit into main

Conversation

@BY571 BY571 (Contributor) commented Jul 12, 2024

Description

Adds AbsorbingStateTransform as used in the DAC (Discriminator-Actor-Critic) paper.

Motivation and Context

Why is this change required? What problem does it solve?
If it fixes an open issue, please link to the issue here.
You can use the syntax close #15213 if this solves the issue #15213

  • I have raised an issue to propose this change (required for new features and bug fixes)

Types of changes

What types of changes does your code introduce? Remove all that do not apply:

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds core functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (update in the documentation)
  • Example (update in the folder of examples)

Checklist

Go over all the following points, and put an x in all the boxes that apply.
If you are unsure about any of these, don't hesitate to ask. We are here to help!

  • I have read the CONTRIBUTION guide (required)
  • My change requires a change to the documentation.
  • I have updated the tests accordingly (required for a bug fix or a new feature).
  • I have updated the documentation accordingly.

pytorch-bot bot commented Jul 12, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/rl/2290

Note: Links to docs will display an error until the docs builds have been completed.

❌ 5 New Failures, 1 Unrelated Failure

As of commit 1d43d8b with merge base 8e43ac8:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following job failed but was also present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed label on Jul 12, 2024
@BY571 BY571 (Contributor, Author) commented Jul 12, 2024

Will update the docstring with examples.
@vmoens might need your help on the tests.

@vmoens vmoens (Contributor) left a comment

Thanks for this!
I'm not entirely sure about the implementation; I think it'll break in many (edge) cases.
Can you open an issue requesting the feature, so we can discuss the proper way of handling this?

>>> from torchrl.envs import GymEnv
>>> t = AbsorbingStateTransform(max_episode_length=1000)
>>> base_env = GymEnv("HalfCheetah-v4")
>>> env = TransformedEnv(base_env, t)
That's not very informative about the functionality ;)
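A slightly more informative version of the example might show the transform's effect on the observation, e.g. (a sketch only; the printed shape assumes HalfCheetah-v4's 17-dimensional observation gains one absorbing-state flag):

>>> from torchrl.envs import GymEnv, TransformedEnv
>>> t = AbsorbingStateTransform(max_episode_length=1000)
>>> env = TransformedEnv(GymEnv("HalfCheetah-v4"), t)
>>> td = env.reset()
>>> td["observation"].shape  # one extra absorbing-state dimension appended
torch.Size([18])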

terminate_key: Optional[NestedKey] = "terminated",
):
if in_keys is None:
in_keys = "observation" # default
["observation"] no?

batch_size = observation.size(0)
if self._done:
# Create absorbing states for the batched observations
absorbing_state = torch.eye(observation.size(1) + 1)[-1]
This is rather wasteful: we're creating a big tensor and indexing it, and since the result is a view on that storage, the original storage isn't freed when you index.

Besides, it lacks dtype and device.

You can create an incomplete eye by passing both n and m to torch.eye; see the torch.eye docs.
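One way to build the absorbing state [0, ..., 0, 1] without materializing the full (n+1)x(n+1) identity, sketched here so that dtype and device follow the observation (illustrative only, not the reviewer's exact suggestion):

# absorbing state [0, ..., 0, 1] with matching dtype/device
absorbing_state = observation.new_zeros(observation.size(-1) + 1)
absorbing_state[-1] = 1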

# Create absorbing states for the batched observations
absorbing_state = torch.eye(observation.size(1) + 1)[-1]
return absorbing_state.expand(batch_size, -1)
zeros = torch.zeros(batch_size, 1)
Missing device and dtype

You could use observation.new_zeros
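A sketch of that suggestion, so the padding column inherits the observation's dtype and device:

# zeros column on the same device/dtype as the observation
zeros = observation.new_zeros(batch_size, 1)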

@@ -8557,3 +8557,157 @@ def _inv_call(self, tensordict):
if self.sampling == self.SamplingStrategy.RANDOM:
action = action + self.jitters * torch.rand_like(self.jitters)
return tensordict.set(self.in_keys_inv[0], action)


class AbsorbingStateTransform(ObservationTransform):
We need tests for this class.
It should also be registered in __init__.py and added to the documentation.

def forward(self, tensordict: TensorDictBase) -> TensorDictBase:
raise RuntimeError(FORWARD_NOT_IMPLEMENTED.format(type(self)))

def _apply_transform(self, observation: torch.Tensor) -> torch.Tensor:
There is a version of this that works for all batch sizes; this one will only work with uni- or bidimensional batch sizes.
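A shape-agnostic sketch of the core padding logic, assuming the done flags carry a trailing singleton dimension that broadcasts against the observation's batch dimensions (names are illustrative, not the PR's implementation):

def _pad_absorbing_dim(observation: torch.Tensor, done: torch.Tensor) -> torch.Tensor:
    # Append the absorbing-state indicator as an extra feature; works for any
    # number of leading batch dimensions (0, 1, 2, ...).
    pad = observation.new_zeros(*observation.shape[:-1], 1)
    obs = torch.cat([observation, pad], dim=-1)
    # Where an episode is done, replace the whole feature vector by [0, ..., 0, 1].
    absorbing = obs.new_zeros(obs.shape[-1])
    absorbing[-1] = 1
    return torch.where(done, absorbing, obs)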

elif observation.dim() == 2:
# Batched observations
batch_size = observation.size(0)
if self._done:
Do we need an in-place value? How does that work if one sub-env is done and the other isn't?
Maybe we could read the done state and change it on the fly, without using a local attribute.
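Reading the done flags on the fly and masking per sub-env, rather than caching a single flag on the transform, might look roughly like this (a sketch reusing the _pad_absorbing_dim helper above; key handling is an assumption, not the PR's code):

done = tensordict.get(self.done_key)            # bool, shape (*batch, 1)
observation = tensordict.get(self.in_keys[0])   # shape (*batch, obs_dim)
tensordict.set(self.out_keys[0], _pad_absorbing_dim(observation, done))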

)
return tensordict
done = tensordict.get(self.done_key)
self._done = done.any()
This means that if any sub-env is done, all are treated as done?

# Single observation
if self._done:
# Return absorbing state which is [0, ..., 0, 1]
return torch.eye(observation.size(0) + 1)[-1]
What if the observation is more than 1D?

@vmoens vmoens (Contributor) left a comment

Sorry, wrongfully approved.

@vmoens vmoens added the enhancement and CLA Signed labels and removed the CLA Signed label on Jul 22, 2024