
Fix Issue #257 by including additional details on Kube Service routing #3065

Draft
wants to merge 3 commits into
base: main

Conversation


@jkneubuh jkneubuh commented Nov 19, 2021

Signed-off-by: Josh Kneubuhl [email protected]

Type of change

  • Documentation update

Description

This PR adds a brief summary of Kubernetes Service routing as a mechanism to implement basic HA / DR / connection load balancing to the Fabric Gateway.

Additional details

A sample integration of this Kubernetes Service routing is available in the Kubernetes Test Network HA guide

Kube test network updates to enable gateway client load balancing are available in fabric-samples PR #532

Related issues

Additional discussion, and the original issue, is tracked at fabric-gateway issue #257

Resolves #257

@jkneubuh jkneubuh requested review from a team as code owners November 19, 2021 19:15

denyeart commented Nov 19, 2021

This is great information and I am primarily interested in getting the content captured somewhere for now. It goes deeper into Kubernetes than I was expecting. That's not a bad thing, but it raises the question of how we want to evolve the docs to be more aligned with Kubernetes going forward. We could inject such content throughout the docs as done here, or we could keep the load-balancing concepts fairly generic in the core Fabric topics and cover the Kubernetes specifics somewhere else, such as the Deployment Guide topics. Without a wider Kubernetes doc strategy in place yet, I'm happy to keep the content here for now, and then potentially shift it to somewhere like the Deployment Guide once we have that wider strategy. I thought we could at least start the discussion here.

Let's see if documentation specialists such as @joshhus and @denali49 have any thoughts on the topic.

And we probably want @mbwhite to provide a technical review as well.

## Gateway Service Routing with Kubernetes

In typical Fabric deployments on Kubernetes, each peer node is exposed by its own Kubernetes `Service` instance and resolved via Kubernetes DNS. This technique is sufficient for the Fabric Gateway and application clients to resolve individual peers within a Fabric network. In cases requiring high availability and/or client connection load balancing, Kubernetes can be configured with an additional `Service` resource bound to multiple gateway peers. With this HA topology, gateway clients can reference a set of peer nodes through a single DNS alias.
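A minimal sketch of such a shared `Service` follows. All names here (`org1-peer-gateway`, the `app: org1-peer` label, the namespace) are hypothetical illustrations, not taken from this PR or the linked sample:

```yaml
# Hypothetical Service fronting several peer pods of org1.
# Gateway clients resolve a single DNS alias
# (org1-peer-gateway.<namespace>.svc.cluster.local) and Kubernetes
# distributes new gRPC connections across the matching pods.
apiVersion: v1
kind: Service
metadata:
  name: org1-peer-gateway
spec:
  selector:
    app: org1-peer     # must match the pod labels on each peer Deployment
  ports:
    - name: grpc
      port: 7051
      targetPort: 7051
```

Note that a ClusterIP Service balances at connection time: each new gRPC connection is routed to one pod, and all requests on that connection stay with that pod.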

An observation: is it worth pushing the gateway client strongly here? (a) It makes the gateway client well known, but also (b) this approach works because the gateway gives clients a single point of (network) contact with Fabric. I'm really not sure this would work as well with the previous SDKs.


An example load-balanced gateway service is available in the [Kubernetes Test Network](https://github.com/hyperledger/fabric-samples/blob/main/test-network-k8s/docs/HIGH_AVAILABILITY.md). In this example:
- Each organization defines an `orgN-peer-gateway` `Service`, bound to a set of peer `Deployments`.
- The TLS enrollments / certificates for each peer include the shared service name as a Subject Alternative Name (SAN) alias.
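One way to include the shared service name as a Subject Alternative Name is to pass it via the Fabric CA client's `--csr.hosts` flag at TLS enrollment time. This is a sketch under assumed names: the URL, credentials, and hostnames below are hypothetical, not taken from this PR:

```shell
# Hypothetical TLS enrollment for peer1 of org1: the CSR asks the TLS CA
# to include both the peer's own hostname and the shared gateway alias
# as Subject Alternative Names in the issued certificate.
fabric-ca-client enroll \
  --url https://peer1:peer1pw@org1-tls-ca:7054 \
  --csr.hosts peer1.org1.example.com,org1-peer-gateway \
  --mspdir /var/hyperledger/fabric/tls
```

Without the shared alias in the certificate's SAN list, clients connecting through the Service DNS name would fail TLS hostname verification.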
... in addition to the peerX.orgY hostname

@jkneubuh jkneubuh marked this pull request as draft December 16, 2021 17:05
@jkneubuh (Contributor, Author)

Hi @denyeart and @joshhus - I did a little word smithing in here and moved the Kube-specific content out of the Gateway configuration guide. The section now hangs as a top-level sub-section under the Deployment Guide on read-the-docs.

I tried to emulate the multi-level links using the deploypeer/*.md and deployorderer/*.md files as an example, but it doesn't sit right. For example, under "Creating a peer" there is a page of content with a nav list at the bottom that links to the sub-pages under deploypeer/*.md. This doesn't seem like the right fit for the Kubernetes Considerations, which feels like it should be captured as a long and growing list of things to keep in mind when deploying to cloud; i.e., the Kubernetes considerations seem like they should be a two-level nav on the left navbar, rather than a three-level nav like the deploypeer and deployorderer sections.

Dave, is this a little closer to what you were envisioning?


denyeart commented Feb 1, 2023

@jkneubuh Going through old issues and PRs, I realized we need to finish this one out.

There is some good guidance now in https://github.com/hyperledger/fabric-samples/blob/main/full-stack-asset-transfer-guide/docs/ApplicationDev/01-FabricGateway.md#production-deployment-of-fabric-gateway.

We should determine how to best convey this information across Fabric docs and the full stack asset transfer docs.


mbwhite commented Feb 6, 2023

@denyeart @jkneubuh

I've got some additional information that could be used to extend this as well.

@julian-trustgrid

Hi guys,

Just chiming in, as I'm having issues with load balancing across multiple deployment pods. I have created a Kubernetes Service and a client application that communicates with the Service via gRPC.

I'm tailing the logs behind the Kubernetes Service and noticed that only one peer is receiving requests. Is it because the client application is maintaining a long-lived gRPC connection, so requests are not getting routed to the other peers?

Thank you
