feat: add upstream strategy random
#1221
Conversation
Codecov Report (Attention)

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #1221      +/-   ##
==========================================
+ Coverage   93.70%   93.77%   +0.06%
==========================================
  Files          70       72       +2
  Lines        5687     5892     +205
==========================================
+ Hits         5329     5525     +196
- Misses        277      284       +7
- Partials       81       83       +2
```

☔ View full report in Codecov by Sentry.
Force-pushed from 20f06ec to ac4b80d.
Looks promising. I'll look into it after I fix the parallel processing of unit tests. 🥴
I like the idea and most of the changes are great. I wonder if we could make `random` a specific `parallel_best` configuration: add an option (could just be in the code for now, not necessarily user facing) for how many resolvers to race.

Then `random` would just set that to 1 :)
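A minimal sketch of that suggestion, with illustrative names (`parallelBest`, `resolverCount`, and `upstream` are assumptions, not blocky's actual types): race a configurable number of resolvers, and `random` becomes the special case where that number is 1.

```go
package resolver

import "math/rand"

// upstream is an illustrative stand-in for blocky's resolver type.
type upstream interface {
	Resolve(question string) (string, error)
}

type parallelBest struct {
	resolvers     []upstream
	resolverCount int // how many resolvers to race; 1 behaves like "random"
}

// pick selects resolverCount distinct resolvers at random
// (assumes resolverCount <= len(r.resolvers)).
func (r *parallelBest) pick() []upstream {
	chosen := make([]upstream, 0, r.resolverCount)
	for _, i := range rand.Perm(len(r.resolvers))[:r.resolverCount] {
		chosen = append(chosen, r.resolvers[i])
	}

	return chosen
}

// resolve races the picked resolvers and returns the first success.
// With resolverCount == 1 there is no race: one random upstream is asked.
func (r *parallelBest) resolve(question string) (string, error) {
	type result struct {
		answer string
		err    error
	}

	candidates := r.pick()
	ch := make(chan result, len(candidates))

	for _, u := range candidates {
		go func(u upstream) {
			answer, err := u.Resolve(question)
			ch <- result{answer, err}
		}(u)
	}

	var lastErr error

	for range candidates {
		res := <-ch
		if res.err == nil {
			return res.answer, nil
		}

		lastErr = res.err
	}

	return "", lastErr
}
```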
Good idea, I'll look into it :)
Force-pushed from ac4b80d to 4fe067d.
I've now implemented the suggested option. I'm not 100% happy with the implementation and it still feels a bit rough. After my changes the random selection is a little bit different than before (I needed to increase the values checked in blocky/resolver/parallel_best_resolver_test.go, lines 292 to 294 at 0e74a3a).
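As a hedged illustration (plain `testing`, uniform picks, and made-up numbers; not the linked Ginkgo test), this is the kind of distribution check where the allowed deviation has to grow when the selection scheme changes:

```go
package resolver

import (
	"math/rand"
	"testing"
)

func TestRandomPickDistribution(t *testing.T) {
	names := []string{"upstream1", "upstream2"}
	counts := make(map[string]int, len(names))

	// Uniform pick as a stand-in for the real selection logic.
	for i := 0; i < 1000; i++ {
		counts[names[rand.Intn(len(names))]]++
	}

	// Expect roughly 500 picks each; the ±100 tolerance is the knob that
	// needs widening when the distribution shifts slightly.
	for _, n := range names {
		if got := counts[n]; got < 400 || got > 600 {
			t.Errorf("%s picked %d times, want 500±100", n, got)
		}
	}
}
```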
Force-pushed from 0e74a3a to 3f54720.
Sorry for the delay, I missed the original notification. I'll take another look today :)
General state is pretty good, looking forward to being able to add a config for how many resolvers to query in parallel!

Regarding the err check in `resolve()` of `parallel_best_resolver.go`: I'm not 100% certain, but I don't think the upstream resolver returns `context.Canceled` at all; there are also no tests for it.
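To make that concrete, a small sketch of the check being discussed (`collectErr` is a hypothetical helper, not blocky's code): skip `context.Canceled`, which would only come from our own cancellation (e.g. a resolver that lost the race), and keep genuine upstream failures.

```go
package resolver

import (
	"context"
	"errors"
)

// collectErr keeps real upstream errors and drops context.Canceled,
// which only our own cancellation would produce.
func collectErr(errs []error, err error) []error {
	if err != nil && !errors.Is(err, context.Canceled) {
		errs = append(errs, err)
	}

	return errs
}
```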
Force-pushed from c25d271 to 61d2f1d.
- `maps.Keys()`
- docs
- `parallel_best` resolver `String()`: remove expensive & unnecessary log
- return err from failed retry
- pre-alloc `pickRandom` `chosenResolvers`
- fix context comment
- `upstreamResolverStatus.resolve()`: add resolver to err
Force-pushed from 61d2f1d to deee9dc.
Commits: `resolversPerClient` removal & add upstream strategy `random`.
Did some tiny changes before merging to remove the TODO in the tests and fix a race caused by concurrent reading and writing. Thanks for all the work on this!
Thank you for reviewing the PR 😄
If you want these changes in separate PRs I'll do so 🙂

This PR includes multiple changes:

- Removed the `resolversPerClient` struct used in all upstream group/branch resolvers and replaced it with a separate `groupName` string and a flat `upstreamResolverStatus` slice. The `NewXYZResolver()` funcs now accept the new `UpstreamGroupConfig` struct instead of `UpstreamsConfig`. There was only ever one `upstreamResolverStatus` slice in the map (Add upstream strategy `strict` #1093 implemented a "tree"-like architecture which resulted in only one slice existing in the map).
- Removing the `context.Canceled` check for the `parallel_best` resolver. I'm not 100% certain, but from what I can tell the `upstream_resolver` `Resolve()` won't ever return `context.Canceled`. Edit: we keep the check so we don't miss it when we eventually pass a context to the `upstream_resolver` `Resolve()` func.
- Add the `random` upstream strategy 🎉 It's `parallel_best` without the race: if the first randomly selected upstream fails, a second will be asked (see the sketch below).
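A compact sketch of that behaviour (`upstream` and `resolveRandom` are illustrative names, not the PR's actual code): ask one randomly chosen upstream and fall back to a second random one on failure.

```go
package resolver

import (
	"errors"
	"fmt"
	"math/rand"
)

// upstream is an illustrative stand-in for blocky's resolver type.
type upstream interface {
	Name() string
	Resolve(question string) (string, error)
}

// resolveRandom is parallel_best without the race: one random upstream is
// asked first, and if it fails, one more randomly chosen upstream is tried.
func resolveRandom(upstreams []upstream, question string) (string, error) {
	if len(upstreams) == 0 {
		return "", errors.New("no upstreams configured")
	}

	tries := 2 // the first pick plus one fallback
	if tries > len(upstreams) {
		tries = len(upstreams)
	}

	var lastErr error

	for _, i := range rand.Perm(len(upstreams))[:tries] {
		answer, err := upstreams[i].Resolve(question)
		if err == nil {
			return answer, nil
		}

		lastErr = fmt.Errorf("upstream %s failed: %w", upstreams[i].Name(), err)
	}

	return "", lastErr
}
```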
There are also a few things we could discuss or create issues for:

- Passing a context into the `upstream_resolver` `Resolve()` to allow cancellation of the `retry.Do` if e.g. a timeout is reached or the other resolver won the race (for `parallel_best`). Currently the `retry.Do` func will still retry any failed calls (see the sketch after this list).
- In Add upstream strategy `strict` #1093, and also in this PR, I've used the `Config.Upstreams.Timeout` to "fail" early and continue with the "try next resolver" logic, even though `retry.Do` might be able to successfully get an answer on one of the two remaining retries. We could drop `retry.Do`, or multiply the timeout for the contexts I've set by 3 to only continue with the next resolvers if all retries failed.
- The structs returned by `NewXYZResolver` all embed the `Configurable` interface, and I'm not 100% sure this is still needed/useful because the only configuration they currently have is the group name and the `Upstreams` slice.
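A hedged sketch of the first point, assuming the avast/retry-go library (`resolveWithRetry` and its parameters are illustrative): passing the request context into the retry loop lets a timeout, or a lost race in `parallel_best`, cancel the remaining attempts instead of letting them run to completion.

```go
package resolver

import (
	"context"
	"time"

	"github.com/avast/retry-go/v4"
)

// resolveWithRetry stops retrying as soon as ctx is done, whether because
// the timeout fired or because the caller cancelled after the race was won.
func resolveWithRetry(ctx context.Context, query func() error, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()

	return retry.Do(
		query,
		retry.Attempts(3),
		retry.Context(ctx), // abort pending retries once ctx is cancelled
	)
}
```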
Thanks for maintaining blocky ❤️