Switch from reqwest to bitreq HTTP client
#56
base: main
Conversation
Replace the `reqwest` dependency with `bitreq`, a lighter HTTP(S) client that reduces code bloat and dependency overhead. Key changes:

- Use `bitreq::Client` for connection pooling and reuse
- Update all HTTP request handling to use bitreq's API
- Remove `reqwest::header::HeaderMap` in favor of `HashMap<String, String>`
- Simplify `LnurlAuthToJwtProvider::new()` to no longer return a `Result`
- Use `serde_json::from_slice()` directly for JSON parsing
- Build script uses bitreq's blocking `send()` method

Co-Authored-By: HAL 9000
Signed-off-by: Elias Rohrer <dev@tnull.de>
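One of the listed changes, dropping `reqwest::header::HeaderMap` for a plain `HashMap<String, String>`, can be sketched with std only. This is a hypothetical illustration: the function name, header values, and the `bearer_token` parameter are not taken from the crate.

```rust
use std::collections::HashMap;

// Illustrative sketch: default headers as a plain HashMap<String, String>,
// as the PR does after removing reqwest::header::HeaderMap. The specific
// header names and the bearer_token parameter are assumptions.
fn default_headers(bearer_token: Option<&str>) -> HashMap<String, String> {
    let mut headers = HashMap::new();
    headers.insert("content-type".to_string(), "application/octet-stream".to_string());
    if let Some(token) = bearer_token {
        headers.insert("authorization".to_string(), format!("Bearer {}", token));
    }
    headers
}

fn main() {
    let headers = default_headers(Some("jwt-token"));
    assert_eq!(
        headers.get("authorization").map(String::as_str),
        Some("Bearer jwt-token")
    );
    println!("{} headers set", headers.len());
}
```

A plain `HashMap` avoids pulling in `http`'s typed header machinery, at the cost of losing case-insensitive lookups, which is fine when the map is only ever written by our own code.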
```rust
let auth_response =
    auth_request.send_async().await.map_err(VssHeaderProviderError::from)?;
let lnurl_auth_response: LnurlAuthResponse =
    serde_json::from_slice(&auth_response.into_bytes()).map_err(|e| {
```
We could use bitreq's json-using-serde which goes through UTF-8 parsing first, but this seemed simpler and followed what the reqwest::Response::json method did (cf https://docs.rs/reqwest/latest/src/reqwest/async_impl/response.rs.html#269-273). Not sure if anyone has an opinion here?
either is fine to me 🤷♂️
@tnull sounds to me like this is something we can upstream to bitreq, no? I.e. pass the `Vec<u8>` body straight into `serde_json::from_slice` and skip the UTF-8 parsing, which seems a net plus.
> @tnull sounds to me like this is something we can upstream to bitreq, no? I.e. pass the `Vec<u8>` body straight into `serde_json::from_slice` and skip the UTF-8 parsing, which seems a net plus.
Yeah, I considered that too. Just haven't fully made up my mind whether not validating UTF-8 is okay. Maybe it is here, but not necessarily in the general case?
IIUC i would expect serde_json::from_slice to validate UTF-8 as necessary during deserialization...
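The difference under discussion can be shown with std only. Assumption (per the comment above): bitreq's `json-using-serde` path validates the whole body as UTF-8 first, whereas `serde_json::from_slice` skips that up-front pass and only rejects invalid bytes where the deserialized type actually contains strings.

```rust
// Sketch of the extra step an up-front UTF-8 pass adds: the whole body is
// validated before any JSON parsing happens. serde_json::from_slice avoids
// this pass and works on the raw bytes directly.
fn validate_utf8(body: &[u8]) -> Result<&str, std::str::Utf8Error> {
    std::str::from_utf8(body)
}

fn main() {
    // A well-formed JSON body passes the up-front validation.
    assert!(validate_utf8(b"{\"tag\":\"login\"}").is_ok());
    // Invalid UTF-8 anywhere in the body is rejected before parsing starts.
    assert!(validate_utf8(b"{\"tag\":\"\xff\"}").is_err());
}
```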
tnull left a comment:
Honestly not sure why the idna crate suddenly started failing to build under 1.75.0. We aim to soon replace the whole url dependency with our own Url type, but for now we can also just fix the issue by bumping the MSRV to 1.85, since LDK Node already requires that anyway and we aim to soon update LDK to that as well.
Wait, what? This is news to me. Can we just remove the cursed
No, Matt, we discussed this plan for hours at the Rust Summit. We landed on that most ecosystem crates (BDK, etc.) would update ~right away given their release cycles, but that LDK would defer bumping until after Ubuntu 26.04 is out, which will of course happen this April.
Yeah, working on it as we speak. Just did a
Right, April is a ways off though, and we likely don't want to bump the day a new release comes out. (I'm kinda tired of us rushing to bump the day something comes out for the sake of it, without a specific motivating Rust feature, at least now that we've stripped out the stupid HTTP deps that keep causing dep-MSRV bumps.) And I thought we wanted to upstream long before then: lightningdevkit/rust-lightning#4323
```diff
 const APPLICATION_OCTET_STREAM: &str = "application/octet-stream";
-const DEFAULT_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
+const DEFAULT_TIMEOUT_SECS: u64 = 10;
+const MAX_RESPONSE_BODY_SIZE: usize = 500 * 1024 * 1024; // 500 MiB
```
Now bumped this considerably, as of course 10MB would be much too small. Let me know if you have an opinion on a better value here @TheBlueMatt @tankyleo
Mmm, yea, good question. On the server side we defaulted to 1G because it's postgres' limit. Not that "just because it's postgres' limit" is a good reason to do anything, though. 500 seems fine enough to me, honestly. As long as it's documented I don't feel super strongly.

As 10 MB is definitely too small.
Signed-off-by: Elias Rohrer <dev@tnull.de>
Force-pushed from 15114d4 to ef17d83
```rust
.with_body(request_body)
    .with_timeout(DEFAULT_TIMEOUT_SECS)
    .with_max_body_size(Some(MAX_RESPONSE_BODY_SIZE))
    .with_pipelining();
```
Hmm bitreq says "Note that because pipelined requests may be replayed in case of failure, you should only set this on idempotent requests", and our PutObjectRequest is not idempotent.
> and our `PutObjectRequest` is not idempotent

Why is it not idempotent? If the request fails, we'd also assume the HTTP client retries? This is essentially the same, no?
Hmm right, I was thinking that if we replay the same versioned put object request, the server will return an error instead of OK, and hence that call is not idempotent.

The concern here is that request A is actually successful and the version increments server side, but for some reason bitreq considers it a failure and replays the request, which becomes request B.

Request B fails because of a version mismatch, kicks off the retry policy client side, and eventually results in a failure getting bubbled back up to LDK Node even though request A succeeded.
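The replay hazard described above can be sketched with a toy versioned store. All names here are hypothetical; this only models the server-side version check, not bitreq's pipelining machinery.

```rust
use std::collections::HashMap;

// Toy versioned KV store mimicking the server-side version check: a put with
// version `v` only succeeds if `v` matches the stored version, which then
// increments. This check is what makes a versioned put non-idempotent.
struct Store {
    versions: HashMap<String, u64>,
}

impl Store {
    fn put(&mut self, key: &str, version: u64) -> Result<(), &'static str> {
        let current = self.versions.entry(key.to_string()).or_insert(0);
        if *current != version {
            return Err("version mismatch");
        }
        *current += 1;
        Ok(())
    }
}

fn main() {
    let mut store = Store { versions: HashMap::new() };
    // Request A succeeds and bumps the server-side version...
    assert!(store.put("k", 0).is_ok());
    // ...so a byte-identical replay of the same request (request B) now fails,
    // even though the original write actually went through.
    assert_eq!(store.put("k", 0), Err("version mismatch"));
}
```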
```diff
 parent_key: Xpriv,
 url: String,
 default_headers: HashMap<String, String>,
-client: reqwest::Client,
```
Are we sure we want to delete the cached client here? I thought we could cache it since the first GET request in the LNURL auth flow is always to the same URL.
AFAIU, we'll only retrieve a new JWT token if the old one expired, which should be on the order of days, or at least hours. It seems to me that keeping a client around (and potentially even keeping a connection open) is a bunch of unnecessary overhead if we just make single requests that rarely.
Sounds good to me, thank you.
```rust
.map_err(VssHeaderProviderError::from)?;
let lnurl_request = bitreq::get(&self.url)
    .with_headers(self.default_headers.clone())
    .with_timeout(DEFAULT_TIMEOUT_SECS);
```
I know this is a trusted server, but do we want to set a max body size here to prevent any possible OOM crashes? Same for the GET request below.
True, I overlooked that. Do you have a suggestion for a good limit here?
How about 16 KiB? Just from intuition after some brief googling; we are fetching a URL here.
```rust
if status.is_success() {
```

```rust
let http_request = bitreq::post(url)
    .with_header("content-type", APPLICATION_OCTET_STREAM)
```
nit: like you've done elsewhere, let's put this hard-coded string in a constant? Here and also in the lnurl auth module.
```diff
 impl VssError {
 	/// Create new instance of `VssError`
-	pub fn new(status: StatusCode, payload: Bytes) -> VssError {
+	pub fn new(status_code: i32, payload: Vec<u8>) -> VssError {
```
nit: we don't need ownership and can pass a `&[u8]` here.
Hmm, but if we don't end up `clone`ing at the callsite, isn't move semantics even preferable to passing a reference?
Hmm, I have always had the habit of not asking for ownership if I don't need it. I wouldn't want someone to clone later down the line to accommodate the `Vec<u8>` here, forgetting to look inside and realize cloning was not necessary.

> isn't move semantics even preferable to passing a reference?

Do you mean that once we receive an error status code, the payload should be owned by `VssError`, and any subsequent interactions with that payload should happen through the `VssError` API?
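The trade-off being debated can be shown with a toy mirror of the constructor (the type and field names follow the diff above, but both constructors and their bodies are illustrative, not the crate's code). Taking `Vec<u8>` lets a caller that already owns the body move it in with zero copies, while `&[u8]` forces an unconditional copy inside the constructor:

```rust
// Hypothetical mirror of the VssError constructor to illustrate the
// ownership trade-off discussed in the review.
struct VssError {
    status_code: i32,
    payload: Vec<u8>,
}

impl VssError {
    // Move semantics: no copy if the caller already owns the buffer.
    fn new_owned(status_code: i32, payload: Vec<u8>) -> VssError {
        VssError { status_code, payload }
    }

    // Borrowed input: always copies, but never forces the caller
    // to give up its buffer.
    fn new_borrowed(status_code: i32, payload: &[u8]) -> VssError {
        VssError { status_code, payload: payload.to_vec() }
    }
}

fn main() {
    let body = vec![1u8, 2, 3];
    let err = VssError::new_owned(500, body); // `body` is moved, zero copies
    assert_eq!(err.payload, vec![1, 2, 3]);

    let err2 = VssError::new_borrowed(500, &[4, 5]); // copies the slice
    assert_eq!(err2.status_code, 500);
    assert_eq!(err2.payload, vec![4, 5]);
}
```

Since the error path only runs on failed requests and the response body is already owned at the callsite, the copy-vs-move difference is unlikely to matter in practice either way.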
```rust
.map_err(VssHeaderProviderError::from)?;
let auth_request = bitreq::get(&signed_lnurl)
    .with_headers(self.default_headers.clone())
    .with_timeout(DEFAULT_TIMEOUT_SECS);
```
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I'd put a 16KiB limit here too, this should be a short response.