feat(rust-client): Implement many API using batch endpoint#277
Conversation
@sentry review
Semver Impact of This PR: 🟡 Minor (new features)

📋 Changelog Preview
This is how your changes will appear in the changelog.

New Features ✨
Documentation 📚

🤖 This preview updates automatically when you update the PR.
@sentry review
Force-pushed from 3f49d54 to d6dbeee (Compare)
Force-pushed from 7f060c7 to bf71d9c (Compare)
```rust
            BatchOperation::Get { key, decompress } => Some((key.clone(), *decompress)),
            _ => None,
        })
        .collect();
```
Duplicate GET operations lose individual decompress settings
Medium Severity
The `decompress_map` uses a `HashMap` keyed by `ObjectKey`, causing multiple GET operations on the same key to share a single decompress setting. When users add multiple GET requests for the same object with different `decompress()` values, only the last setting is retained. All responses for that key will incorrectly use the same decompression behavior, ignoring individual operation preferences and potentially returning data in the wrong format.
Additional Locations (1)
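One possible direction for a fix, sketched below with simplified stand-in types (`ObjectKey` and `BatchOperation` are reduced to minimal definitions here, and responses are assumed to be correlated to operations by position), is to key the decompress settings by operation index rather than by `ObjectKey` alone:

```rust
use std::collections::HashMap;

// Minimal stand-in types for illustration; the real client's definitions differ.
#[derive(Clone, Hash, PartialEq, Eq)]
struct ObjectKey(String);

enum BatchOperation {
    Get { key: ObjectKey, decompress: bool },
    Put { key: ObjectKey, payload: Vec<u8> },
}

/// Collect one entry per GET *operation* (keyed by its index in the batch),
/// so two GETs on the same key with different `decompress` values no longer
/// overwrite each other in the map.
fn decompress_by_operation(ops: &[BatchOperation]) -> HashMap<usize, bool> {
    ops.iter()
        .enumerate()
        .filter_map(|(idx, op)| match op {
            BatchOperation::Get { decompress, .. } => Some((idx, *decompress)),
            _ => None,
        })
        .collect()
}
```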
Cursor Bugbot has reviewed your changes and found 1 potential issue.
```rust
        results.push(OperationResult::from_field(field, &decompress_map).await);
    }

    Ok(results)
```
Missing batch response cardinality check
Medium Severity
`send_batch` returns whatever parts were parsed without verifying that their count matches the number of submitted operations. If the server or an intermediary returns a valid multipart response with missing parts, the returned `OperationResult`s are incomplete and callers may treat the omitted operations as if they had never failed.
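A guard along the following lines could reject responses whose part count doesn't match the number of submitted operations. This is only a sketch; `check_response_cardinality` and `IncompleteResponse` are illustrative names, not the crate's actual API:

```rust
/// Hypothetical error for a multipart response with missing parts.
#[derive(Debug)]
struct IncompleteResponse {
    expected: usize,
    received: usize,
}

/// Verify that the server returned exactly one part per submitted operation
/// before handing the results back to the caller.
fn check_response_cardinality<T>(
    submitted: usize,
    results: &[T],
) -> Result<(), IncompleteResponse> {
    if results.len() != submitted {
        return Err(IncompleteResponse {
            expected: submitted,
            received: results.len(),
        });
    }
    Ok(())
}
```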


Adds a `many` API that submits batch requests to the server.

The API is called `many` to indicate that it can be interpreted as just a "hint" to use batching: a `many` request from the user's point of view doesn't necessarily map to a single batch request to the server.

This is a nice API when creating the requests, but less so when retrieving the results, as the user needs to deal with wrapper types and with different error types as well.

Keep in mind that most of the expected usage here is `sentry-cli` uploading multiple files. It's therefore quite unlikely that the user wants to inspect the result wrappers or match on the individual errors; they most likely just care whether all operations succeeded. Motivated by the above, this PR also adds:

- an `error_for_failures` function that's useful if you just want to know whether all operations succeeded or not (see the sketch below)

Close FS-241
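As a rough illustration of the collapsing behaviour described above, `error_for_failures` can be thought of along these lines; the types below are stand-ins for illustration, not the client's actual result wrappers:

```rust
/// Stand-in for the aggregated error; the real client's error type differs.
#[derive(Debug)]
struct FailedOperations(Vec<String>);

/// Collapse a batch of per-operation results into a single success/failure,
/// which is all a caller like sentry-cli typically needs.
fn error_for_failures<T, E: std::fmt::Display>(
    results: Vec<Result<T, E>>,
) -> Result<Vec<T>, FailedOperations> {
    let mut successes = Vec::new();
    let mut failures = Vec::new();
    for result in results {
        match result {
            Ok(value) => successes.push(value),
            Err(err) => failures.push(err.to_string()),
        }
    }
    if failures.is_empty() {
        Ok(successes)
    } else {
        Err(FailedOperations(failures))
    }
}
```

Collapsing to a single error keeps the common upload-everything path free of per-operation result handling while still surfacing which operations failed.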