
[object_store] Should object_store retry on broken pipe errors when calling put? #5545

Closed
ClementTsang opened this issue Mar 22, 2024 · 4 comments
Labels
question Further information is requested

Comments

@ClementTsang

Which part is this question about

object_store

Describe your question

We've been using object_store, and occasionally we see logs like this (some details omitted for brevity) when calling put():

2024-03-20T14:22:18.239+00:00 ERROR (src/rust/apps/flusher/src/upload_task.rs:50) - Error while trying to flush file to object store: error resolving file 'file[...]': internal error
Caused by:
    0: internal error
    1: Generic S3 error: Error after 0 retries in 1.096125ms, max_retries:10, retry_timeout:60s, source:error sending request for url (https://s3.us-east-1.amazonaws.com/...): error writing a body to connection: Broken pipe (os error 32)
    2: Error after 0 retries in 1.096125ms, max_retries:10, retry_timeout:60s, source:error sending request for url (https://s3.us-east-1.amazonaws.com/...): error writing a body to connection: Broken pipe (os error 32)
    3: error sending request for url (https://s3.us-east-1.amazonaws.com/...): error writing a body to connection: Broken pipe (os error 32)
    4: error writing a body to connection: Broken pipe (os error 32)
    5: Broken pipe (os error 32)
  ...

This is despite having retries configured, so it was confusing to us that the error reports 0 retries. Should this be something that gets retried?

Additional context

This seems to be a similar issue to #5106.

I've also done a bit of digging: this looks like the error hyper returns when there's a BodyWrite error.

It looks like object_store checks for hyper errors here, but the BodyWrite error isn't covered. Unfortunately, I don't think hyper exposes a public interface for this check at the moment.

For now we've manually wrapped put with our own retry and checked the error's Display output, but that's pretty jury-rigged.
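
A minimal sketch of that kind of workaround, with the function name, attempt limit, backoff, and the "Broken pipe" substring check chosen for illustration rather than taken from anything object_store provides:

```rust
use std::time::Duration;

use bytes::Bytes;
use object_store::{path::Path, ObjectStore};

/// Retry `put` when the rendered error message mentions a broken pipe,
/// since hyper's BodyWrite variant isn't publicly exposed.
async fn put_with_retry(
    store: &dyn ObjectStore,
    location: &Path,
    data: Bytes,
    max_attempts: u32,
) -> object_store::Result<()> {
    let mut attempt = 0;
    loop {
        match store.put(location, data.clone().into()).await {
            Ok(_) => return Ok(()),
            // Jury-rigged check: inspect the Display output for "Broken pipe".
            Err(e) if attempt + 1 < max_attempts && e.to_string().contains("Broken pipe") => {
                attempt += 1;
                // Placeholder linear backoff between attempts.
                tokio::time::sleep(Duration::from_millis(100 * u64::from(attempt))).await;
            }
            Err(e) => return Err(e),
        }
    }
}
```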

ClementTsang added the question label Mar 22, 2024
@tustvold
Contributor

tustvold commented Mar 23, 2024

Could you provide a bit of context on how you're running into this issue? I wonder if you are multiplexing CPU-bound tasks on the same threadpool and thereby starving out the IO tasks? Possibly something similar to #5366.

Unfortunately, I don't think hyper exposes a public interface for this check at the moment.

Perhaps you might file an issue in the hyper repo to get feedback on exposing this upstream? I'd be interested to know the circumstances in which this error might occur, and whether there is a reason it isn't currently exposed.

@ClementTsang
Author

ClementTsang commented Mar 23, 2024

We're doing a lot of frequent writes, so that could be possible. It seems to be pretty sporadic when it happens. (EDIT: I'll try to reproduce it with a more minimal example next week)

I can file an issue upstream to ask, thanks for the pointer.

@westonpace
Member

Could you provide a bit of context on how you're running into this issue?

I encountered this performing a GET request (part of ObjectStore::head) to fetch an object's size. We get failures like this very infrequently, so I'm assuming it is just an exceptional event on the cloud storage servers. We used to have this problem more reliably with get_range, but we ended up surrounding get_range with our own retry loop (we now also surround size with a similar retry loop, so this isn't urgent for us).
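
A minimal sketch of that kind of retry loop around ObjectStore::head, with a placeholder attempt count and backoff (not the values used here):

```rust
use std::time::Duration;

use object_store::{path::Path, ObjectMeta, ObjectStore};

/// Retry `head` a few times before giving up; the caller reads the
/// object's size from the returned ObjectMeta.
async fn head_with_retry(
    store: &dyn ObjectStore,
    location: &Path,
) -> object_store::Result<ObjectMeta> {
    let mut last_err = None;
    for attempt in 0..3u32 {
        match store.head(location).await {
            Ok(meta) => return Ok(meta),
            Err(e) => {
                last_err = Some(e);
                // Simple exponential backoff between attempts.
                tokio::time::sleep(Duration::from_millis(100u64 << attempt)).await;
            }
        }
    }
    Err(last_err.expect("at least one attempt was made"))
}
```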

I wonder if you are multiplexing CPU-bound tasks on the same threadpool and thereby starving out the IO tasks?

Yes, we do perform quite a few tasks in parallel, but we do quite a bit of profiling, and CPU-bound tasks shouldn't generally be starving the thread pool.

Perhaps you might file an issue in the hyper repo to get feedback on exposing this upstream?

I don't have any evidence to believe that hyper is doing anything incorrect here. I think a much more likely explanation is an error/bug in the GCS server (or some kind of throttling/load-balancing component).

@tustvold
Contributor

Closed by #5609
