
Fix access to VMMemoryDefinition::current_length on big-endian #3013

Merged: 1 commit merged into bytecodealliance:main on Jun 23, 2021

Conversation

@uweigand (Member)

The current_length member is defined as "usize" in Rust code,
but generated wasm code refers to it as if it were "u32". Not
sure why this is the case; it happens to work on little-endian
machines (as long as the length is < 4GB), but it will always
fail on big-endian machines.

Add the minimal fixes to make it work on big-endian as well.
(Note: this patch does not attempt to fix the actual underlying
type mismatch ...)

@alexcrichton (Member)

This makes me much more worried than the patch fix here implies, I think. I would have figured that the solution would be either to change the generated wasm code to load this field with native endianness, or to always store the field in little-endian. In any case, it definitely seems like wasm and native should at least agree on the size of the field?

How does the current patch fix tests? Would it be possible to implement a deeper fix?

@uweigand (Member Author)

> This makes me much more worried than the patch fix here implies, I think. I would have figured that the solution would be either to change the generated wasm code to load this field with native endianness, or to always store the field in little-endian. In any case, it definitely seems like wasm and native should at least agree on the size of the field?

The current code stores the field in native endianness as 8 bytes, and loads the field in native endianness as 4 bytes (taking the first 4 bytes of the 8 byte field). That works if native endian is little. If native endian is big, it works with this patch where the 4 bytes are now taken to be the last 4 bytes (instead of the first 4 bytes) of the 8 byte field.
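The mismatch described above can be sketched in plain Rust (this is an illustration of the byte-layout issue, not Wasmtime's actual code): reading the low 32 bits of an 8-byte native-endian field requires offset 0 on little-endian but offset 4 on big-endian, which is exactly the adjustment the patch makes.

```rust
// Illustrative sketch of why a 4-byte load from an 8-byte native-endian
// field needs an endian-dependent offset. `load_low_u32` is a
// hypothetical helper, not a Wasmtime function.
use std::convert::TryInto;

fn load_low_u32(field: u64) -> u32 {
    // Native-endian in-memory representation of the 8-byte field.
    let bytes = field.to_ne_bytes();
    // Little-endian: the low 32 bits are the *first* 4 bytes.
    // Big-endian: the low 32 bits are the *last* 4 bytes.
    let offset = if cfg!(target_endian = "little") { 0 } else { 4 };
    u32::from_ne_bytes(bytes[offset..offset + 4].try_into().unwrap())
}

fn main() {
    let current_length: u64 = 0x0001_0000; // 64 KiB, fits in 32 bits
    // On both endiannesses, the offset-corrected load sees the value:
    assert_eq!(load_low_u32(current_length), 0x0001_0000);
    println!("ok");
}
```

Without the offset correction, a big-endian machine reading the first 4 bytes would see the *high* half of the field, i.e. always zero for lengths under 4GB.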

I do not know why the sizes do not match; that is really the underlying problem, I'd say. I didn't try to fix this since it was already flagged as a TODO (with the test that the sizes match commented out), so I assumed a fix would be non-trivial.

@alexcrichton (Member)

Oh I see, yeah, that makes sense how the fix works. I think, though, there's no fundamental reason why there's a disagreement here, so the best fix would be to align the sizes of the loads/stores between Rust & wasm.

@uweigand (Member Author)

> Oh I see, yeah, that makes sense how the fix works. I think, though, there's no fundamental reason why there's a disagreement here, so the best fix would be to align the sizes of the loads/stores between Rust & wasm.

So what should the size be? It seems Rust wants to use usize since that fits better into the Rust type system (natural type for lengths/sizes), while the wasm code wants to use u32 for more efficient guard code generation?

@alexcrichton (Member)

I would naively say that the JIT code should guide us here, since that's probably the most performance-sensitive part. But at the same time, if this is a u32, then it also means that there's not actually support for a 4GB memory, if the value stored here needs to be the length of the memory (which I thought it was?).

This means that we may need to store a usize for full 4GB memory support, but that might have a greater effect on the performance of JIT code; I don't know how much the u32 aspect is relied on for bounds checks.

@uweigand (Member Author)

So from what I can see, the cranelift JIT makes the hard-coded assumption that the gv used as bound (which is where current_length ends up) must be of the same type as the index_type of the (dynamic) heap in question. And the index type of all heaps generated by current code (at least in make_heap in crates/cranelift/src/func_environ.rs, not sure if there are other places) is always hardcoded to types::I32. It does seem a bad idea to change this, as that would force all the index computations to be performed as 64-bit operations ...

Of course, this means that the dynamic bound checks are currently simply wrong for a heap size of 4GB or more. However, it seems that the main use case of the 4GB heap (SpiderMonkey) uses the static heap type anyway?
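To make concrete why a 32-bit bound breaks for a 4GB heap, here is the shape of a dynamic-heap bounds check sketched in plain Rust (`in_bounds` and its parameters are illustrative names, not Cranelift API): a heap size of exactly 4GB is 0x1_0000_0000, which truncates to 0 in a 32-bit field.

```rust
// Sketch of a dynamic-heap bounds check with a 32-bit bound, as the
// JIT conceptually emits it. Not actual Cranelift-generated code.
fn in_bounds(index: u32, access_size: u32, bound: u32) -> bool {
    // index + access_size must not exceed bound; widen to u64 so the
    // addition itself cannot overflow.
    (index as u64) + (access_size as u64) <= bound as u64
}

fn main() {
    // A 4-byte access at the end of a 64 KiB heap passes:
    assert!(in_bounds(0xFFFC, 4, 0x1_0000));
    // But a 4 GiB length truncated to u32 becomes 0, so with a 32-bit
    // bound field every access to such a heap would spuriously fail:
    let four_gib: u64 = 0x1_0000_0000;
    assert!(!in_bounds(0, 1, four_gib as u32));
    println!("ok");
}
```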

On the other hand, I noticed the same current_length field also seems to be used by the lightbeam back-end, which apparently treats it always as a 64-bit value, if I understand the magic dynasm! macro correctly:
https://github.com/bytecodealliance/wasmtime/blob/main/crates/lightbeam/src/backend.rs#L1902

All this makes me more hesitant to attempt to change this logic ...

@alexcrichton (Member)

Ah ok, that makes sense. Could this be updated to store a u32, with an issue filed about how 4GB heaps have the wrong bounds checks?

@uweigand (Member Author)

> Ah ok, that makes sense. Could this be updated to store a u32, with an issue filed about how 4GB heaps have the wrong bounds checks?

But that would break lightbeam, right? The current code I pointed to above seems to rely on the value having been stored as u64 (with the high bits zero) ...

@alexcrichton (Member)

Personally, I think that's ok. Lightbeam isn't tested at all and hasn't received maintenance in many months, I don't think this is the only part about it which is broken.

@uweigand (Member Author)

I gave it a quick try. If current_length is u32, this will now cause an assertion to be raised whenever someone attempts to set the field to 4GB or larger. However, this already triggered in the test suite (in instance::linear_memory_limits) ...

Unfortunately, I don't think I can just ignore the assertion, since current_length is also used for length checks in Rust code, so even if the dynamic heap method is used (so the problem with JITed code doesn't occur), having a u32 current_length field would then cause those Rust checks to be incorrect.
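The assertion mentioned above follows directly from the arithmetic: 65536 pages of 65536 bytes is exactly 2^32, one more than u32::MAX. A sketch (with illustrative names, not Wasmtime's actual code) of the checked conversion a u32 field would force:

```rust
// Sketch: storing a memory's byte length into a hypothetical u32
// current_length field requires a checked conversion, and the 4 GiB
// case necessarily fails. `set_current_length` is an illustrative
// helper, not Wasmtime code. Assumes a 64-bit host so the 4 GiB
// length fits in usize.
fn set_current_length(len: usize) -> Result<u32, String> {
    u32::try_from(len).map_err(|_| format!("length {:#x} does not fit in u32", len))
}

fn main() {
    // 4 GiB minus one 64 KiB page still fits:
    assert!(set_current_length(65535 * 65536).is_ok());
    // Exactly 4 GiB (65536 pages of 65536 bytes) does not:
    assert!(set_current_length(65536 * 65536).is_err());
    println!("ok");
}
```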

@bjorn3 (Contributor) commented Jun 22, 2021

For wasm32, the linear memory can be at most 4GB as it uses 32-bit pointers. Wasmtime doesn't yet support wasm64, and once it does, it will have to change codegen between wasm32 and wasm64 anyway, as a static heap is impossible even on 64-bit systems; so changing current_length from 32-bit to 64-bit in that case shouldn't be much of a problem, I think.
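A quick arithmetic check of that limit: wasm32 pages are 64 KiB, and a 32-bit address space admits at most 65536 of them, so wasm32 linear memory tops out at exactly 4 GiB.

```rust
// Page-size arithmetic behind the wasm32 4 GiB ceiling. Constant names
// here are illustrative, not taken from the Wasmtime source.
const WASM_PAGE_SIZE: u64 = 65536; // 64 KiB, fixed by the wasm spec
const WASM32_MAX_PAGES: u64 = 65536; // most pages addressable with u32

fn main() {
    // 65536 pages * 65536 bytes = 2^32 bytes = 4 GiB exactly:
    assert_eq!(WASM_PAGE_SIZE * WASM32_MAX_PAGES, 1u64 << 32);
    println!("max wasm32 memory = {} bytes", WASM_PAGE_SIZE * WASM32_MAX_PAGES);
}
```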

@alexcrichton (Member)

Ah yes, the test that specifically tries to grow to 4GB should be updated. It should grow to 4GB minus one page and then assert that growing the extra page fails. Basically, Wasmtime is buggy right now with 4GB heaps, so I don't think we should pretend they work by simply allowing the buggy code to also run on big-endian platforms; ideally we'd fix the issue outright here. We can have a documented limitation that Wasmtime supports 4GB heaps minus a page, and an issue tracking the fix for that limitation.

For the length checks in Rust code, I'm not sure I understand? I understand that u32 and usize are different types, but I would imagine that the conversion needs to happen and we'd handle it wherever current_length is read/written.

@uweigand (Member Author)

My understanding is that a 4GB dynamic heap is currently broken due to incorrect checks in JITed code. However, a 4GB static heap works correctly (and is apparently used by SpiderMonkey if I'm reading the docs correctly).

In that latter case, current_length is currently set to 4GB, and that value is used in Rust code for various length checks. If we change current_length to a u32, it can no longer hold that value. If we instead set the u32 current_length to some other value, then the Rust length checks will be incorrect. And if we simply disallow the 4GB case, then 4GB static heaps will also stop working, which I guess may break SpiderMonkey then ...

@alexcrichton (Member)

Can you clarify what you mean by spidermonkey? Do you mean the Cranelift integration in SpiderMonkey? Or something else?

(I'm not aware what this is in reference to so hard to comment on breaking it...)

@uweigand (Member Author)

I'm referring to this text: https://github.com/bytecodealliance/wasmtime/blob/main/cranelift/docs/ir.md#heap-examples

> The SpiderMonkey VM prefers to use fixed heaps with a 4 GB bound and 2 GB of offset-guard pages when running WebAssembly code on 64-bit CPUs.

@alexcrichton (Member)

Ah ok, changing this in Wasmtime would have no effect on that. This isn't really a cranelift-level fix but rather a Wasmtime-level fix.

@uweigand (Member Author)

> Ah ok, changing this in Wasmtime would have no effect on that. This isn't really a cranelift-level fix but rather a Wasmtime-level fix.

I see, makes sense.

I've now tried to implement this:

> It should grow to 4GB minus one page and then assert that growing the extra page fails.

by setting WASM_MAX_PAGES to 65535 instead of 65536, but then spec tests start to fail as they use (memory 65536) -- is this required by the WebAssembly spec?

@alexcrichton (Member)

Oh right, yeah, we don't want to change the type-level maximum, only the runtime-level maximum. Modules which declare a max size of 65536 pages should still validate; just that when they request that much memory we say "sorry, no go". I think this will want changes in crates/runtime/src/memory.rs to reject grow and new calls that reach 65536 pages.
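The distinction above can be sketched as a runtime-level cap (a hypothetical helper, not the actual code in crates/runtime/src/memory.rs): the type-level maximum of 65536 pages still validates, but any attempt to actually reach 65536 pages at runtime is rejected.

```rust
// Sketch of a runtime-level page cap. `MAX_RUNTIME_PAGES` and
// `try_grow` are illustrative names, not Wasmtime's actual API.
const MAX_RUNTIME_PAGES: u32 = 65535; // 4 GiB minus one 64 KiB page

fn try_grow(current_pages: u32, delta_pages: u32) -> Option<u32> {
    // Overflow-safe addition of the requested page count.
    let requested = current_pages.checked_add(delta_pages)?;
    if requested > MAX_RUNTIME_PAGES {
        None // "sorry, no go": would reach the broken 4 GiB case
    } else {
        Some(requested)
    }
}

fn main() {
    // Growing up to 4 GiB minus one page succeeds:
    assert_eq!(try_grow(0, 65535), Some(65535));
    // Requesting the final page (exactly 4 GiB) is rejected:
    assert_eq!(try_grow(65535, 1), None);
    println!("ok");
}
```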

@uweigand (Member Author)

> Oh right, yeah, we don't want to change the type-level maximum, only the runtime-level maximum. Modules which declare a max size of 65536 pages should still validate; just that when they request that much memory we say "sorry, no go". I think this will want changes in crates/runtime/src/memory.rs to reject grow and new calls that reach 65536 pages.

That approach seems to work. With the updated patch all tests pass on s390x as well. Thanks!

@github-actions bot added the label wasmtime:api (Related to the API of the `wasmtime` crate itself) on Jun 22, 2021

@alexcrichton (Member) left a review comment
Looks great to me, thanks for this! I filed an issue and just have some minor comments about the structure of the new checks

(Review comments on crates/runtime/src/memory.rs: outdated, resolved)
The current_length member is defined as "usize" in Rust code,
but generated wasm code refers to it as if it were "u32".
While this happens to mostly work on little-endian machines
(as long as the length is < 4GB), it will always fail on
big-endian machines.

Fixed by making current_length "u32" in Rust as well, and
ensuring the actual memory size is always less than 4GB.
@alexcrichton alexcrichton merged commit 83007b7 into bytecodealliance:main Jun 23, 2021
@uweigand uweigand deleted the fix-currentlength branch June 23, 2021 16:46
alexcrichton added a commit to alexcrichton/wasmtime that referenced this pull request Jun 24, 2021
Wasmtime was updated to reject creation of memories exactly 4gb in size
in bytecodealliance#3013, but the fuzzers still had the assumption that any request to
create a host object for a particular wasm type would succeed.
Unfortunately now, though, a request to create a 4gb memory fails. This
is an expected failure, though, so the fix here was to catch the error
and allow it.
Labels
wasmtime:api Related to the API of the `wasmtime` crate itself
3 participants