Reuse the IoVecBuffer on TX #4589

Open

JackThomson2 wants to merge 2 commits into main

Conversation

JackThomson2 (Contributor)

Changes

Removes the per-packet allocation of the IoVecBuffer in the virtio net device on the TX path by reusing the same instance for every message.

To make this work the type has to implement Send; I have tried to cover the unsafe sections as best I can.
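
For illustration, a minimal, self-contained sketch of the reuse pattern (all names here, TxBuffer, NetDevice, process_tx, segments, are simplified stand-ins, not the actual Firecracker types; the real IoVecBuffer stores iovec entries pointing into guest memory):

// Stand-in for an IoVecBuffer-like type whose Vec keeps its capacity between packets.
#[derive(Default)]
struct TxBuffer {
    // (guest_addr, len) pairs describing one descriptor chain.
    segments: Vec<(u64, usize)>,
}

impl TxBuffer {
    // Refill from a new chain, reusing the existing allocation.
    fn load(&mut self, chain: &[(u64, usize)]) {
        self.segments.clear(); // keeps capacity, so steady-state TX does not allocate
        self.segments.extend_from_slice(chain);
    }
}

struct NetDevice {
    // One buffer owned by the device and reused across TX requests,
    // instead of constructing a fresh buffer per packet.
    tx_buffer: TxBuffer,
}

impl NetDevice {
    fn process_tx(&mut self, chains: &[Vec<(u64, usize)>]) {
        for chain in chains {
            self.tx_buffer.load(chain);
            // ... write self.tx_buffer out to the TAP device here ...
        }
    }
}

In the real device the buffer holds raw pointers into guest memory, which is why Send has to be implemented and justified by hand; the stand-in above is trivially Send.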

Reason

Reduces the number of allocations.

Ticket here: #4549

License Acceptance

By submitting this pull request, I confirm that my contribution is made under
the terms of the Apache 2.0 license. For more information on following Developer
Certificate of Origin and signing off your commits, please check
CONTRIBUTING.md.

PR Checklist

  • If a specific issue led to this PR, this PR closes the issue.
  • The description of changes is clear and encompassing.
  • Any required documentation changes (code and docs) are included in this
    PR.
  • API changes follow the Runbook for Firecracker API changes.
  • User-facing changes are mentioned in CHANGELOG.md.
  • All added/changed functionality is tested.
  • New TODOs link to an issue.
  • Commits meet
    contribution quality standards.

  • This functionality cannot be added in rust-vmm.

@bchalios (Contributor) left a comment:

Just a couple of comments regarding unnecessary SAFETY comments. Other than that LGTM

///
/// The descriptor chain cannot be referencing the same memory location as another chain
pub unsafe fn from_descriptor_chain(head: DescriptorChain) -> Result<Self, IoVecError> {
// SAFETY: New buffer is created from the DescriptorChain which doesn't implement Clone
Contributor:

Why is that useful here?

@@ -197,6 +199,8 @@ impl Net {
activate_evt: EventFd::new(libc::EFD_NONBLOCK).map_err(NetError::EventFd)?,
mmds_ns: None,
metrics: NetMetricsPerDevice::alloc(id),
// SAFETY: Only constructed in the VMM thread so no concurrent buffers
Contributor:

I don't think we need to have a SAFETY comment for this; calling IoVecBuffer::default() is not unsafe.

Contributor:

Same as before.

Contributor:

Yup, I think these might be left-overs from when new was unsafe?

JackThomson2 (Contributor, author):

Ah yes, these must have been left over, my bad.
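
To illustrate the convention being agreed on here, a small mock example (MockIoVecBuffer and the chain representation are hypothetical stand-ins, not the real API): a SAFETY comment documents how the caller upholds an unsafe contract, so a safe Default::default() call needs none.

// Mock stand-ins, for illustration only.
#[derive(Default)]
struct MockIoVecBuffer(Vec<(u64, usize)>);

impl MockIoVecBuffer {
    /// # Safety
    /// The caller must ensure no other buffer references the same chain.
    unsafe fn from_descriptor_chain(chain: &[(u64, usize)]) -> Self {
        MockIoVecBuffer(chain.to_vec())
    }
}

fn example(chain: &[(u64, usize)]) {
    // Safe constructor: no SAFETY comment needed.
    let _empty = MockIoVecBuffer::default();

    // Unsafe call: this is where a SAFETY comment is warranted.
    // SAFETY: `chain` is not loaded into any other buffer at this point.
    let _loaded = unsafe { MockIoVecBuffer::from_descriptor_chain(chain) };
}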

src/vmm/src/devices/virtio/iovec.rs (thread resolved)

continue;
}
};
// SAFETY: This descriptor chain is only loaded into this buffer
Contributor:

Let's elaborate on these - virtio requests are handled sequentially, so no two IoVecBuffers are ever "live" at the same time, meaning this one really has exclusive ownership over the memory (well, from Rust's side; the guest can do as it pleases).
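
A possible wording for the expanded comment, based on the reasoning above (an illustrative suggestion only, not the text merged in the PR):

// SAFETY: TX requests are handled sequentially, so at most one IoVecBuffer is
// loaded from this queue's descriptor chains at any time. On the Rust side the
// buffer therefore has exclusive ownership of the memory the chain describes;
// the guest itself may still modify that memory concurrently.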

Contributor:

We should clear() this buffer as soon as we're done using it (e.g. at the end of this function)

JackThomson2 (Contributor, author):

Sounds great, thanks. I'll update the comments and clear at the end.
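
Continuing the stand-in sketch from the PR description above, clearing the reused buffer once the TX pass is done (the clear() on the stand-in type mirrors the suggested IoVecBuffer clear; the exact placement in the real code may differ):

impl NetDevice {
    fn process_tx_and_clear(&mut self, chains: &[Vec<(u64, usize)>]) {
        for chain in chains {
            self.tx_buffer.load(chain);
            // ... transmit the packet ...
        }
        // Drop the stale segment list after the pass, so the reused buffer
        // never outlives the descriptor chains it was pointing at.
        self.tx_buffer.segments.clear();
    }
}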

@@ -124,7 +124,8 @@ impl VsockPacket {
/// - [`VsockError::DescChainTooShortForPacket`] if the contained vsock header describes a vsock
/// packet whose length exceeds the descriptor chain's actual total buffer length.
pub fn from_tx_virtq_head(chain: DescriptorChain) -> Result<Self, VsockError> {
let buffer = IoVecBuffer::from_descriptor_chain(chain)?;
// SAFETY: chain is only loaded into a single buffer
Contributor:

Same comment as above :)

@roypat added the Status: Awaiting author label on Jul 1, 2024
Commit: On the net virtio device reuse the IoVecBuffer on the TX path

Signed-off-by: Jack Thomson <[email protected]>

codecov bot commented Jul 9, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 82.13%. Comparing base (91f68d4) to head (a64f3d5).

Additional details and impacted files
@@            Coverage Diff            @@
##             main    #4589      +/-   ##
=========================================
  Coverage   82.12%   82.13%   +0.01%
=========================================
  Files         255      255
  Lines       31281    31291      +10
=========================================
  Hits        25689    25700      +11
  Misses       5592     5591       -1
Flag Coverage Δ
4.14-c5n.metal 79.63% <100.00%> (+0.01%) ⬆️
4.14-m5n.metal 79.61% <100.00%> (+0.01%) ⬆️
4.14-m6a.metal 78.83% <100.00%> (+<0.01%) ⬆️
4.14-m6g.metal 76.64% <100.00%> (+0.01%) ⬆️
4.14-m6i.metal 79.61% <100.00%> (+0.01%) ⬆️
4.14-m7g.metal 76.64% <100.00%> (+0.01%) ⬆️
5.10-c5n.metal 82.14% <100.00%> (+<0.01%) ⬆️
5.10-m5n.metal 82.13% <100.00%> (+0.01%) ⬆️
5.10-m6a.metal 81.44% <100.00%> (+<0.01%) ⬆️
5.10-m6g.metal 79.42% <100.00%> (+0.01%) ⬆️
5.10-m6i.metal 82.12% <100.00%> (+<0.01%) ⬆️
5.10-m7g.metal 79.42% <100.00%> (+0.01%) ⬆️
6.1-c5n.metal 82.14% <100.00%> (+<0.01%) ⬆️
6.1-m5n.metal 82.12% <100.00%> (+<0.01%) ⬆️
6.1-m6a.metal 81.43% <100.00%> (+0.01%) ⬆️
6.1-m6g.metal 79.42% <100.00%> (+0.01%) ⬆️
6.1-m6i.metal 82.12% <100.00%> (+<0.01%) ⬆️
6.1-m7g.metal 79.42% <100.00%> (+0.01%) ⬆️

Flags with carried forward coverage won't be shown.


Labels: Status: Awaiting author

3 participants