
Lower _linalg_svd #4537

Merged
merged 4 commits into from
Feb 1, 2023

Conversation

wonjoolee95
Collaborator

@wonjoolee95 wonjoolee95 commented Jan 31, 2023

Fixes #4511


Lower _linalg_svd

This change lowers _linalg_svd by mostly reusing the existing svd lowering. However, there are a couple of notable differences between these two ops:

  1. torch.svd()'s `some` is the opposite of torch.linalg.svd()'s `full_matrices`. Note that the default value for both is True, so the default behaviors are effectively opposite.
  2. torch.svd() returns V, whereas torch.linalg.svd() returns Vh (V transposed).
  3. If `compute_uv` is False, torch.svd() returns zero-filled tensors for U and V, whereas torch.linalg.svd() returns empty tensors.
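The differences above can be checked directly in eager-mode stock PyTorch (a minimal sketch, independent of the XLA lowering; note torch.svd is deprecated upstream but still callable):

```python
import torch

A = torch.randn(5, 3)

# Differences 1 and 2: torch.svd(some=True) corresponds to
# torch.linalg.svd(full_matrices=False), and linalg.svd returns Vh rather than V.
U1, S1, V1 = torch.svd(A, some=True)          # deprecated API
U2, S2, Vh2 = torch.linalg.svd(A, full_matrices=False)

# Singular values agree; U/V columns may differ by sign between backends,
# so compare the reconstructions rather than the factors themselves.
assert torch.allclose(S1, S2, atol=1e-5)
assert torch.allclose(U1 @ torch.diag(S1) @ V1.T, A, atol=1e-4)
assert torch.allclose(U2 @ torch.diag(S2) @ Vh2, A, atol=1e-4)

# Difference 3: with compute_uv=False, torch.svd returns zero-filled U and V.
U0, S0, V0 = torch.svd(A, compute_uv=False)
assert (U0 == 0).all() and (V0 == 0).all()
```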

As a result of this change, we also disable the following ops in xla/test/test_ops.py:

  • geqrf
  • linalg.det
  • linalg.inv_ex
  • linalg.matrix_rank
  • linalg.svd
  • linalg.svdvals

These ops all call into the newly lowered _linalg_svd and fail with: Slice dim size 1 greater than dynamic slice dimension: 0. This failure also appears on our master branch for the already lowered svd op. In xla/test/test_ops.py, ops such as svd, pinverse, and nuclear_norm that call svd fail with the same error and are disabled. To ensure correctness of the newly lowered _linalg_svd, this PR adds unit tests and does not introduce any C++ test regressions.
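Several of these ops really do share the same underlying SVD path at the Python level; for example, torch.linalg.svdvals is effectively the compute_uv=False entry into _linalg_svd and yields exactly the S factor of torch.linalg.svd (a minimal eager-mode sketch, not run on XLA):

```python
import torch

A = torch.randn(4, 4, dtype=torch.float64)

# svdvals returns only the singular values, matching the S factor of svd.
s = torch.linalg.svdvals(A)
_, S, _ = torch.linalg.svd(A)
assert torch.allclose(s, S)

# matrix_rank is also built on singular values; a random Gaussian matrix
# is full rank almost surely, so the rank is 4 here.
assert torch.linalg.matrix_rank(A) == 4
```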

@wonjoolee95
Collaborator Author

CI is failing due to a seemingly unrelated error: ERROR: test_view_copy_out_xla (__main__.TestViewOpsXLA). Let me try to rebase.

@wonjoolee95
Collaborator Author

Update: this test seems to be failing at head as well.

Collaborator

@JackCaoG JackCaoG left a comment


Thanks! Feel free to merge after test is green

@JackCaoG
Collaborator

JackCaoG commented Feb 1, 2023

crashed with frame info

    xla::Shape const* ConsumeValue<xla::Shape const*>(tsl::StatusOr<xla::Shape const*>&&)
    torch_xla::XlaHelpers::ShapeOfXlaOp(xla::XlaOp)
    torch_xla::SVD::Lower(torch_xla::LoweringContext*) const
    torch_xla::LoweringContext::LowerNode(torch::lazy::Node const*)

Seems like some shape in SVD is invalid.

@wonjoolee95
Collaborator Author

Seems like I missed an op in test/test_ops.py that fails with the same error message as in the PR summary. Let me re-run the entire op test suite locally and confirm. If it's failing due to the same SVD shape issue, I'll comment that one out as well.

@wonjoolee95
Collaborator Author

Added some more ops that fail due to the same existing svd error, including a few more linalg methods along with lu and qr, which XLA currently doesn't support.

The tests are now passing locally, but I'll let the CI verify.

@wonjoolee95 wonjoolee95 requested a review from JackCaoG February 1, 2023 18:10
@JackCaoG JackCaoG merged commit b205042 into master Feb 1, 2023
mateuszlewko pushed a commit that referenced this pull request Mar 15, 2023
* Lower _linalg_svd


* Run linter

* Disable some more tests

* Add comments to the disabled tests