
Unix domain sockets #1623

Open · wants to merge 1 commit into master
Conversation

flash-freezing-lava

Based on the previous discussions in #39, here is an implementation to configure a Unix domain socket as a proxy.

This adds the ProxyScheme variant ProxyScheme::UnixSocket and the convenience function Proxy::unix, which avoids an unnecessary Result when the operation can't fail. For parsing URLs into proxies, this uses the unix scheme, as in unix:///path/to/socket.
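The URL mapping above can be sketched with a small hypothetical helper (unix_socket_path is my name, not the PR's parser): a URL like unix:///path/to/socket has an empty authority component, so everything after the scheme separator is the absolute socket path.

```rust
// Hypothetical helper (not the PR's actual parser): extract the filesystem
// path from a `unix://` proxy URL such as `unix:///path/to/socket`.
fn unix_socket_path(url: &str) -> Option<&str> {
    // The authority is empty, so the remainder after `unix://` must be an
    // absolute path starting with `/`.
    url.strip_prefix("unix://").filter(|p| p.starts_with('/'))
}

fn main() {
    assert_eq!(unix_socket_path("unix:///path/to/socket"), Some("/path/to/socket"));
    assert_eq!(unix_socket_path("http://example.com"), None);
}
```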

@@ -294,6 +296,15 @@ impl Connector {
self.verbose.0 = enabled;
}

#[cfg(unix)]
async fn connect_unix_socket<P: AsRef<Path>>(&self, socket: P) -> Result<Conn, BoxError> {
Contributor

Drive-by note: not all Unix sockets are addressed by a pathname. There are also abstract ones (fairly common) and unnamed ones (quite rare); see the section Address Format at https://man7.org/linux/man-pages/man7/unix.7.html.

As this is a new API, it would be nice not to constrain it too much to file paths.


I only implemented path-based Unix sockets, because the unix_socket_abstract feature is still unstable in the Rust standard library. Also, I don't know of a use case where I could test an HTTP-over-abstract-UDS implementation in a real scenario, unlike the implementation for path-based Unix sockets, which I have used for a few months to interact with snapd and tested with dockerd. I only know abstract Unix sockets from libqb's IPC, which doesn't use HTTP, and I had never heard of unnamed Unix sockets before, so it would be a bad idea for me to implement support for them.

Considering reqwest is still pre-1.0.0, I think we can implement path-based Unix sockets for now without precluding abstract Unix socket support in the future. Abstract Unix sockets are somewhat different anyway: they are a Linux-specific extension and not generally available on cfg(unix).

src/connect.rs Outdated
pub async fn connect<P: AsRef<Path>>(socket: P) -> Result<TcpStream, BoxError> {
let target_stream = UnixStream::connect(&socket).await?;
let owned_fd: OwnedFd = target_stream.into_std()?.into();
let stream = std::net::TcpStream::from(owned_fd);
Owner

Is this perfectly fine? To simply cast a Unix domain stream into a TCP stream? If that's the case, it'd be useful to include in the comments a link to a proof of that.


To argue that it is sound to make TcpStream use a file descriptor created by UnixStream, we can split the usage into two scenarios:

  1. What happens if TcpStream is used for protocol-agnostic operations like reading and writing? In this scenario, we have to make sure that our connection behaves correctly as an HTTP-over-UDS connection.
  2. What happens if TcpStream assumes the fd has the AF_INET or AF_INET6 address family and tries to use protocol-specific operations, like calling setsockopt with level = IPPROTO_IP? This would happen, for example, if std::net::TcpStream::set_ttl were called. In this scenario, our HTTP-over-UDS connection doesn't have any sensible Ok behavior (in the example, because Unix socket connections don't have a TTL), so we only have to make sure we return a reasonable error message and don't crash.

Reasoning for the first scenario:
Comparing the common fns between tokio::net::unix::stream::UnixStream and tokio::net::tcp::stream::TcpStream (e.g. ready, readable, poll_read_ready, try_read, try_read_vectored, try_read_buf, writable, poll_write_ready, ...), they are exactly the same. They use the same code to forward to mio::net::uds::stream and mio::net::TcpStream respectively, whose std::io::Read and std::io::Write implementations are also the same and forward to std::net. Both std::net::UnixStream and std::net::TcpStream forward their protocol-agnostic calls to std::sys::unix::net::Socket, so the common operations always result in the same system calls, regardless of whether we issue them through tokio::net::unix::stream::UnixStream or tokio::net::tcp::stream::TcpStream. It doesn't matter which of them we use. This also won't change in the future, because the fact that all sockets are used the same way¹ is built into the POSIX system interfaces specification. The specifications of recv, send, read, etc. don't define any af_family-dependent behavior.
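This protocol-agnostic equivalence can be observed directly with the blocking std types. A minimal sketch (using UnixStream::pair rather than a real server, and the helper name roundtrip_through_cast is mine): data written through the UnixStream end is read back through a TcpStream built from the same fd.

```rust
use std::io::{Read, Write};
use std::os::fd::OwnedFd;
use std::os::unix::net::UnixStream;

// Scenario 1 sketch: plain reads and writes behave identically whether the
// fd is driven through UnixStream or through a TcpStream built from it.
fn roundtrip_through_cast() -> std::io::Result<Vec<u8>> {
    let (mut unix_end, other_end) = UnixStream::pair()?;
    // Reinterpret one end as a TcpStream via its owned file descriptor.
    let mut tcp_view = std::net::TcpStream::from(OwnedFd::from(other_end));

    unix_end.write_all(b"hello")?;
    let mut buf = vec![0u8; 5];
    tcp_view.read_exact(&mut buf)?;
    Ok(buf)
}

fn main() {
    assert_eq!(roundtrip_through_cast().unwrap(), b"hello".to_vec());
}
```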

Reasoning for the second scenario:
On Unix, the calls on TcpStream get forwarded to std::sys::unix::net. All OS calls there are wrapped in io::Result. A nonsensical IPPROTO_IP-specific call, like one to setsockopt, would result in an ENOPROTOOPT OS error, which translates to Rust's io::Result::Err with ErrorKind::Uncategorized. Similarly, trying to use an operation that relies on an INET-specific sockaddr layout (for example, trying to reconnect to another IP addr with a new connect syscall) makes the OS return EAFNOSUPPORT, which again translates to io::Result::Err with ErrorKind::Uncategorized in Rust. This behavior isn't particularly helpful, because the error has to be analyzed with io::Error::raw_os_error and the libc crate to get a useful error message. But considering that an IPPROTO_IP-specific operation can't be performed on an HTTP-over-UDS connection, returning an io::Error is the intended behavior.

¹ Except for the sockaddr variant in connect calls, which raises an EAFNOSUPPORT error if it doesn't match the af_family. But we only reinterpret the fd after the connect call.
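The scenario-2 behavior can be checked the same way. A sketch (helper name ttl_on_unix_fd_fails is mine; the exact errno is OS-dependent, so only is_err() is asserted):

```rust
use std::os::fd::OwnedFd;
use std::os::unix::net::UnixStream;

// Scenario 2 sketch: an IPPROTO_IP-specific operation on a reinterpreted
// AF_UNIX fd is rejected by the kernel and surfaces as an io::Error.
fn ttl_on_unix_fd_fails() -> bool {
    let (_keep_alive, other_end) = UnixStream::pair().expect("socketpair");
    let tcp_view = std::net::TcpStream::from(OwnedFd::from(other_end));
    // set_ttl issues setsockopt(IPPROTO_IP, IP_TTL), which makes no sense
    // for a Unix socket, so the OS rejects it instead of crashing.
    tcp_view.set_ttl(64).is_err()
}

fn main() {
    assert!(ttl_on_unix_fd_fails());
}
```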

@oxalica

oxalica commented Jan 20, 2023

For parsing urls to proxies, this uses the unix scheme like unix:///path/to/socket.

As a reference, curl's --proxy uses the socks5h://localhost/path/to/socket.sock syntax for a Unix domain socket proxy. I prefer theirs, since it explicitly indicates it uses SOCKS5 rather than an HTTP proxy.

get9 added a commit to get9/reqwest that referenced this pull request May 15, 2023
get9 added a commit to get9/reqwest that referenced this pull request May 24, 2023
@PureWhiteWu
Sponsor

Hi, is there any progress on this PR?

This is really useful for us! Thanks!

@svenstaro
Contributor

What's the status of this? Is there anything left to do here?

@domenkozar

@flash-freezing-lava does this work with the blocking client?

@flash-freezing-lava
Author

@domenkozar Yes, that works too.

@domenkozar

@flash-freezing-lava I'm happy to give this a go and report back, can you rebase it?

@flash-freezing-lava
Author

There is the https://github.com/flash-freezing-lava/reqwest/tree/dev branch, which is merged with a recent version of reqwest. That one also doesn't require casting sockets; I don't remember why I thought that necessary when writing the original implementation, but it is not necessary now. I'll rebase this PR later, but you can test with the dev branch now.

@domenkozar

@flash-freezing-lava any reason why reqwest::blocking::get("unix:///tmp/my.sock") isn't supported?

@flash-freezing-lava
Author

flash-freezing-lava commented Apr 20, 2024

What Request-URI would such an HTTP request have in its Request-Line? We could use "*", which is allowed by HTTP, but I haven't seen it in practical use or as an option in other HTTP clients, so I did not consider it.
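For illustration, here is roughly what goes on the wire (a sketch; the helper name write_get, the /v2/snaps path, and the localhost placeholder are my examples, not reqwest API). Even when the transport is a Unix socket, an HTTP/1.1 Request-Line needs a Request-URI, which is why a bare unix:///tmp/my.sock URL leaves nothing sensible to put there.

```rust
use std::io::Write;

// Sketch: serialize a GET Request-Line and headers. Over a Unix socket the
// URL authority is meaningless, so `localhost` serves as a conventional
// placeholder; only the path appears in the Request-Line.
fn write_get<W: Write>(mut w: W, path: &str) -> std::io::Result<()> {
    write!(
        w,
        "GET {path} HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n"
    )
}

fn main() {
    let mut buf = Vec::new();
    write_get(&mut buf, "/v2/snaps").unwrap();
    assert!(buf.starts_with(b"GET /v2/snaps HTTP/1.1\r\n"));
}
```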

@svenstaro
Contributor

svenstaro commented Jul 13, 2024

@flash-freezing-lava seems to me it might make sense to rebase this PR onto https://github.com/flash-freezing-lava/reqwest/tree/dev and then have the reqwest team review this here? Would be great to finally have this upstream!

Preferably, we get a Connection impl for UnixStream
into hyper_util to avoid the new UnixStreamWrapper.

Closes seanmonstar#39
@flash-freezing-lava
Author

Done. Branch is ready for review and squashed to 1 commit.

@hunjixin

hunjixin commented Aug 7, 2024

looking forward to the new version

@iongion

iongion commented Aug 19, 2024

Windows also supports Unix domain sockets using named pipes. Can it not be limited to Unix only?
I am using this with Electron and Node.js (axios + the http adapter for Node), and the user doesn't need to specify anything other than the path to the socket or named pipe.

@Macil

Macil commented Aug 20, 2024

@iongion Windows has direct support for actual Unix domain sockets too as of a few years ago, though Tokio still doesn't support them on Windows yet (tokio-rs/tokio#2201). Having support for Windows named pipes in reqwest might be good too, but they shouldn't be treated as synonyms or accessed through the same reqwest APIs added in this PR.

@iongion

iongion commented Aug 20, 2024

@Macil yup, correct, one at a time. I am trying to move an Electron app to Tauri that exposes an HTTP client based on reqwest. The app does docker/podman socket communication, and there is a blocking dependency on being able to access the socket in a cross-platform manner, named pipes too.

Thank you for all this effort. I can't find any solution for now that works for all scenarios (Unix domain sockets and named pipes), so I'm anxiously waiting for this PR!

10 participants