
Interrupt enable/disable should be using compiler_fence. #11

Open

cr1901 opened this issue Aug 4, 2022 · 9 comments

Comments

@cr1901 (Contributor) commented Aug 4, 2022

For example, in cortex_m:

use core::arch::asm;
use core::sync::atomic::{compiler_fence, Ordering};

pub fn disable() {
    unsafe {
        asm!("cpsid i", options(nomem, nostack, preserves_flags));
    }

    // Ensure no subsequent memory accesses are reordered to before interrupts are disabled.
    compiler_fence(Ordering::SeqCst);
}

compiler_fence does not currently work properly for the msp430 backend. Thank you @eddyb for pointing this out to me.

barrier and nop are also likely affected; barrier needs a fence to prevent reordering, and both functions assume that Rust will not optimize out inline ASM (which I'm not sure is true).
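
For reference, here is a minimal sketch of the assumed shape of those two helpers (an illustration only, not necessarily this crate's exact source):

use core::arch::asm;

// Assumed shapes for illustration only.
#[inline(always)]
pub fn barrier() {
    // An empty asm! statement: without `nomem`/`readonly` the compiler must
    // assume it may access memory, so ordinary accesses should not be moved
    // across it, assuming that implication is actually guaranteed.
    unsafe { asm!("") };
}

#[inline(always)]
pub fn nop() {
    // A single no-op instruction; relies on the asm! never being optimized out.
    unsafe { asm!("nop") };
}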

@taiki-e (Contributor) commented Aug 4, 2022

IIUC, inline assembly with neither nomem nor readonly implies a (SeqCst) compiler fence (refs), so the current usage in this crate should not be affected:

asm!("dint {{ nop");

asm!("nop {{ eint {{ nop");

asm!("nop");

asm!("");

@cr1901 (Contributor, Author) commented Aug 4, 2022

@taiki-e Okay, good to know (thank you for the link as well). Also, from what I've learned since writing the issue, inline asm! won't ever be optimized out unless it's given the pure flag. Since none of the asm! snippets in this crate use flags, we should be good.

However, compiler_fence really does need to be implemented properly for msp430 (at which point I can make barrier a call to compiler_fence(Ordering::SeqCst) for backwards compat). I'll probably change the title in the morning.
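
A rough sketch of what that backwards-compatible shim might look like, assuming compiler_fence is fixed for the msp430 backend:

use core::sync::atomic::{compiler_fence, Ordering};

// Hypothetical sketch: once compiler_fence lowers correctly for the msp430
// backend, `barrier` can simply delegate to it, keeping the existing API.
#[inline(always)]
pub fn barrier() {
    compiler_fence(Ordering::SeqCst);
}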

@taiki-e (Contributor) commented Aug 4, 2022

inline asm! won't ever be optimized out either unless it's given the pure flag

IIUC, this is due to "LLVM doesn't have attributes to represent ReadOnly/ReadNone SideEffect". EDIT: see #11 (comment)
Non-LLVM backends or LLVM in the future may support this, and if you use them, code that relies on the current LLVM behavior may run into problems.

EDIT2: I seem to have misunderstood your comment: #11 (comment)

@taiki-e (Contributor) commented Aug 4, 2022

IIUC, this is due to "LLVM doesn't have attributes to represent ReadOnly/ReadNone SideEffect"

Oh, considering that LLVM has InaccessibleMemOnly (ReadNone SideEffect), my understanding here is wrong.

(Is it actually because it is volatile?)

@eddyb commented Aug 4, 2022

To be clear, I don't/didn't actually know what LLVM guarantees for inline assembly; I couldn't find any ordering interactions being described, but I wouldn't be surprised to hear that (without nomem) it's equivalent to SeqCst.

@cr1901 (Contributor, Author) commented Aug 4, 2022

Is it actually because it is volatile?

Yes, volatile ("Do not optimize away. Do not reorder with other volatile code.") was what I had in mind.

I don't see offhand why ReadOnly/ReadNone SideEffect would allow removing an asm block, and I can't think of an example. But noted that pure alone determining volatility may be subject to change.

@taiki-e (Contributor) commented Aug 4, 2022

I don't see offhand why ReadOnly/ReadNone SideEffect would allow removing an asm block, and I can't think of an example. But noted that pure alone determining volatility may be subject to change.

volatile seems to guarantee that asm will not be removed, but volatile itself does not guarantee that (EDIT: non-volatile) memory accesses will not be reordered (compiler_fence guarantees it. EDIT: asm without nomem and readonly also guarantees it).

ReadNone SideEffect allows reordering (and omitting) of preceding and subsequent memory accesses because it ensures that asm does not perform memory accesses: godbolt

As for ReadOnly SideEffect, I think it should allow reordering of preceding and subsequent memory reads because it ensures that asm does not modify memory. (However, LLVM does not currently seem to support this optimization)
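
A hypothetical illustration of that point (not code from this crate):

use core::arch::asm;

static mut FLAG: u8 = 0;

// Hypothetical example: because `nomem` promises that the asm accesses no
// memory, the first store below can legally be removed as dead (or moved past
// the asm), even though the `nop` itself is still emitted.
unsafe fn store_around_nomem_asm() {
    FLAG = 1; // may be optimized away entirely
    asm!("nop", options(nomem, nostack));
    FLAG = 2;
}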

@cr1901 (Contributor, Author) commented Aug 4, 2022

volatile seems to guarantee that asm will not be removed, but it does not guarantee that memory accesses will not be reordered

I was confused because in a previous comment you said:

Non-LLVM backends or LLVM in the future may support this, and if you use them, code that relies on the current LLVM behavior may run into problems.

Even with your correction in the next comment, I interpreted your comment to mean:

"Right now, LLVM will treat inline asm as volatile as long as you don't use pure. However in future LLVM and other backends, the rules for whether inline asm is treated as volatile will be more intricate than 'was pure specified or not?'". An example of asm that LLVM may not treat as volatile in the future is ReadOnly/ReadNone SideEffect. I can't think of an example offhand where LLVM could reasonably optimize out asm code with side effects (InaccessibleMemOnly- how would LLVM be able to examine an asm code snippet to find out the set of memory locations actually modified?), but that doesn't mean they don't exist.

The above shouldn't affect this crate.

compiler_fence guarantees it

Additionally, as established above, writing asm without nomem and readonly implies a fence. At least I hope so, because otherwise my asm as written in this crate is wrong :P.

@taiki-e (Contributor) commented Aug 4, 2022

Even with your correction in the next comment, I interpreted your comment to mean:

"Right now, LLVM will treat inline asm as volatile as long as you don't use pure. However in future LLVM and other backends, the rules for whether inline asm is treated as volatile will be more intricate than 'was pure specified or not?'".

Sorry, I thought your comment ("inline asm! won't ever be optimized ..." in #11 (comment)) was not discussing "can asm be removed", but rather "does it allow reordering of preceding and subsequent memory accesses" (which is a requirement for using it as a compiler fence). If it's read as a discussion of the former, my comments are indeed very confusing.
