
Use sycl::bfloat16 class and functions instead of float casts. #1341

Open
JackAKirk opened this issue Oct 4, 2023 · 9 comments
Assignees
Labels
enhancement New feature or request

Comments

@JackAKirk
Contributor

JackAKirk commented Oct 4, 2023

The bfloat16 class has been non-experimental for a while now, supporting all backends: #1286
However, SYCLomatic appears not to be using it, and instead always casts to float; see e.g. #1286.
This seems like a lost opportunity. For example, there are native CUDA implementations of bfloat16 math functions in DPC++ that make bfloat16 math much faster than casting to float.

@JackAKirk JackAKirk added the enhancement New feature or request label Oct 4, 2023
@JackAKirk
Contributor Author

@JackAKirk JackAKirk changed the title Use sycl bfloat16 class and functions instead of float casts. Use sycl::bfloat16 class and functions instead of float casts. Oct 4, 2023
@tomflinda
Contributor

@JackAKirk the reference PR #1286 for the sentence "The bfloat16 class has been non-experimental for a while now, supporting all backends" is incorrect; could you provide the correct PR, so that we can confirm that the bfloat16 class is non-experimental?
Thanks.

@JackAKirk
Contributor Author

> @JackAKirk the reference PR #1286 for the sentence "The bfloat16 class has been non-experimental for a while now, supporting all backends" is incorrect; could you provide the correct PR, so that we can confirm that the bfloat16 class is non-experimental? Thanks.

Sorry I meant this one: intel/llvm#6524

Note that I forgot the bfloat16 math functions are still in the experimental namespace: intel/llvm#7567.
However, these bfloat16 math functions have generic support for all backends, at least via float emulation.
So it might be appropriate to move them out of experimental to match the bfloat16 class. @rdeodhar what do you think?

@rdeodhar
Contributor

rdeodhar commented Oct 9, 2023

I think it would be OK to move the math functions out of experimental. @gmlueck do you have an opinion?

@gmlueck
Contributor

gmlueck commented Oct 9, 2023

I think this could be OK. I'd like to consider merging the math functions into the base extension for bfloat16, though, rather than having two separate extensions.

@JackAKirk
Contributor Author

> I think this could be OK. I'd like to consider merging the math functions into the base extension for bfloat16, though, rather than having two separate extensions.

Sounds good to me. I'd be happy to draft a PR merging the two extensions.

@tomflinda
Contributor

@gmlueck @JackAKirk
So, after the bfloat16 math functions are merged into the base extension, we plan to refine the migration logic in SYCLomatic; please remind us after your PR is merged.

@JackAKirk
Contributor Author

> @gmlueck @JackAKirk So, after the bfloat16 math functions are merged into the base extension, we plan to refine the migration logic in SYCLomatic; please remind us after your PR is merged.

PR is here: intel/llvm#11506

@tangjj11 tangjj11 self-assigned this Sep 23, 2024
@tangjj11
Contributor

Hi, @JackAKirk. The PR (intel/llvm#11506) has been in draft status for about a year, so we need to wait.
By the way, you can migrate bf16 math functions by using --use-experimental-features=bfloat16_math_functions. We already support migrating bf16 math functions to the sycl::ext::oneapi::experimental math functions.


5 participants