
Add support for Any, All operations to Tensor #1342

Merged
merged 7 commits into from
Feb 23, 2024

Conversation

@ashdtu (Contributor) commented Feb 21, 2024

Pull Request Template

Checklist

  • Confirmed that the run-checks all script has been executed.
  • Made sure the book is up to date with changes in this PR.

Related Issues/PRs

#1341

Changes

  • Add support for any() (logical_or) and all() (logical_and) operations on the Tensor struct for all types (Float, Int, Bool).
  • For numeric tensor types (Float, Int), the input tensors are cast to Bool by testing each element against zero.
  • The implementation also supports reducing along a dimension via the any_dim() and all_dim() methods.
  • Burn book updated with these Tensor ops.
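The reduction semantics described above can be sketched in plain Rust. This is a hypothetical illustration of the behavior only, not Burn's actual implementation: the helper names (any_nonzero, all_nonzero, any_dim1) are invented here, while Burn's real any()/all()/any_dim()/all_dim() operate on Tensor and return a Bool tensor.

```rust
// Sketch of the any/all semantics: numeric inputs are treated as Bool
// by testing against zero, then reduced with logical OR / logical AND.
fn any_nonzero(values: &[f32]) -> bool {
    values.iter().any(|&v| v != 0.0)
}

fn all_nonzero(values: &[f32]) -> bool {
    values.iter().all(|&v| v != 0.0)
}

// any_dim/all_dim reduce along one axis; sketched here for a 2-D
// row-major matrix, reducing over dim 1 (within each row).
fn any_dim1(matrix: &[Vec<f32>]) -> Vec<bool> {
    matrix.iter().map(|row| any_nonzero(row)).collect()
}

fn main() {
    assert!(any_nonzero(&[0.0, 0.0, 3.5]));
    assert!(!all_nonzero(&[0.0, 0.0, 3.5]));
    assert_eq!(any_dim1(&[vec![0.0, 0.0], vec![1.0, 0.0]]), vec![false, true]);
}
```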

Testing

Test cases written for the implemented methods (any(), any_dim(), all(), all_dim()) with all tensor types (float, int, bool) in burn-tensor/src/tests/ops/any.rs and all.rs.

@ashdtu requested a review from @louisfd on February 21, 2024
@ashdtu added the feature label on February 21, 2024
@ashdtu (Contributor, Author) commented Feb 21, 2024

@louisfd The bug in WGPU with booleans being cast to float/int is no longer a blocker in the updated code. I noticed I can get around it by changing the logic a bit and adding another bool_not op, which converts the odd large values like 1065353216 (the integer bit pattern of 1.0f) to exact zeros. There's no overflow now, since this op happens before the final sum, so it's all ok.

TL;DR There's no blocker for this PR and it works for all backends.
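The "weird large value" mentioned above is explainable: 1065353216 is exactly the IEEE-754 bit pattern of 1.0f32 reinterpreted as an integer, which is what you see when a boolean-as-float buffer is read back as raw integer bits. A small standalone Rust sketch (not Burn code; the normalization step is an assumed analogue of the bool_not-style fix described in the comment):

```rust
fn main() {
    // 1065353216 == 0x3F800000, the IEEE-754 bit pattern of 1.0f32.
    assert_eq!(1.0f32.to_bits(), 1_065_353_216);
    assert_eq!(f32::from_bits(1_065_353_216), 1.0);

    // Normalizing such raw values to exact 0/1 *before* a sum-reduction
    // keeps the accumulator small, avoiding the overflow described above.
    let raw = [1_065_353_216u32, 0, 1_065_353_216];
    let normalized: Vec<u32> = raw.iter().map(|&b| (b != 0) as u32).collect();
    assert_eq!(normalized.iter().sum::<u32>(), 2);
}
```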

codecov bot commented Feb 21, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 78.93%. Comparing base (4427768) to head (438a7c0).
Report is 3 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1342      +/-   ##
==========================================
+ Coverage   78.77%   78.93%   +0.15%     
==========================================
  Files         551      563      +12     
  Lines       61836    62981    +1145     
==========================================
+ Hits        48712    49714    +1002     
- Misses      13124    13267     +143     


@louisfd (Member) left a comment

Awesome! Thank you.
The logic is good everywhere. I do have some comments though, most of them minor, but one is important: you must not call tensor.clone() unless you have to, since it increases the reference count and may prevent operations from happening in-place. To make sure no occurrence of the clone is forgotten, I commented on them all 😄
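The reviewer's point can be illustrated with plain Rust: tensor storage is conceptually a reference-counted handle, and an in-place optimization is only safe while that handle is unique. This Rc sketch is an analogy I am drawing, not Burn's internal code:

```rust
use std::rc::Rc;

fn main() {
    // A shared buffer behind a reference-counted handle, conceptually
    // like a tensor's storage.
    let buffer = Rc::new(vec![1.0f32, 2.0, 3.0]);
    assert_eq!(Rc::strong_count(&buffer), 1); // unique: in-place reuse is safe

    let extra = Rc::clone(&buffer); // analogous to tensor.clone()
    assert_eq!(Rc::strong_count(&buffer), 2); // shared: a copy is now required

    drop(extra);
    // Rc::get_mut only succeeds for a unique handle, mirroring the
    // "can we mutate in place?" check a backend performs.
    let mut buffer = buffer;
    assert!(Rc::get_mut(&mut buffer).is_some());
}
```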

Review comments (all resolved) on:
  • crates/burn-tensor/src/tensor/api/base.rs
  • crates/burn-tensor/src/tensor/api/bool.rs
  • crates/burn-tensor/src/tensor/ops/bool_tensor.rs (3 comments)
  • crates/burn-tensor/src/tensor/ops/tensor.rs (2 comments)
  • crates/burn-tensor/src/tests/mod.rs
  • crates/burn-tensor/src/tests/ops/any.rs
  • crates/burn-tensor/src/tests/ops/all.rs
@louisfd (Member) commented Feb 22, 2024

@ashdtu While we're here, can you change this LSTM test line in lstm.rs (which was the original reason I needed the any op):

 // Asserts the gradients exist and are non zero
 assert!(*some_gradient.abs().sum().into_data().value.first().unwrap() > 0.);

@ashdtu requested a review from @louisfd on February 22, 2024
@louisfd (Member) left a comment

LGTM
Thanks

@louisfd merged commit c86db83 into main on Feb 23, 2024
15 checks passed
@louisfd deleted the feat/bool_ops branch on February 23, 2024
Labels: feature (The feature request)
3 participants