
Security issue #248

Open
lchenneberg opened this issue Mar 3, 2024 · 1 comment

@lchenneberg

I've spent two days digging into the code. I really like the project. You did a great job, and the LangGraph integration is a real pain killer.

There is a potentially serious security issue: you should not delegate the responsibility for creating tools to the client.

I think the right way to go is to delegate the responsibility for declaring the tooling to the server and to refer to it on the frontend side by an identifier. I guess at this stage it is not a lot of code to change.

You need to rethink useCopilotAction (useMakeCopilotActionable is already deprecated and may be deleted ASAP).

The issue with the current approach is that it hands god mode to an attacker: I could modify the prompt client-side and hit the endpoint directly. Since we don't know how the backend will be implemented, perhaps LangChain or LangGraph agents with access to databases or APIs, I could collect information I'm not supposed to have.
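For context, here is a minimal sketch of the pattern I mean. The useCopilotAction shape is from memory and may not match the current API exactly, and the identifier-based variant at the end is purely hypothetical:

```tsx
import { useCopilotAction } from "@copilotkit/react-core"; // assumed import path

function OrdersCopilot() {
  // Today: the tool is fully declared in the browser, so whoever controls the
  // frontend also controls the tool definition and the prompt sent to the backend.
  useCopilotAction({
    name: "queryOrders",
    description: "Look up orders for a customer",
    parameters: [
      { name: "customerId", type: "string", description: "Customer id" },
    ],
    handler: async ({ customerId }) => {
      const res = await fetch(`/api/orders/${customerId}`);
      return res.json();
    },
  });
  return null;
}

// Suggested alternative (purely hypothetical, does not exist today): the server
// owns the tool definition and the frontend only references it by identifier,
// so a tampered client cannot invent or reshape tools.
// useCopilotServerAction({ id: "queryOrders" });
```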

By the way, the project is nice. Thank you for the hard work.

@lchenneberg lchenneberg added the bug Something isn't working label Mar 3, 2024
@ataibarkai
Collaborator

ataibarkai commented Mar 5, 2024

Hi @lchenneberg - thanks so much for writing in.

I want to make sure I understand the concern you're bringing up. Is the following a correct characterization?

An attacker with access to the frontend application could modify the prompts normally passed from the frontend to the backend, in such a way that arbitrary backend-defined actions/chains would be executed and their results leaked back to the frontend.

Is that correct? Assuming that's the concern, my reply is below.

But I'd also love to jump on a quick call with you to make sure I understand the concern correctly, and also to hear what you're building! If any time here works, please feel free to book.


  1. First, you are right: that is indeed a concern developers should be aware of.
    Currently we encourage developers to treat backend-defined actions/chains much like backend-defined REST endpoints: their execution and return values should be treated as exposed to untrusted clients. However, this is not communicated sufficiently clearly in the documentation, and we will correct that. (A short sketch of this mindset follows after this list.)

  2. You are right that it doesn't have to be this way.
    In principle we can draw clearer security boundaries so that actions & chains that must not trust the client can be defined and run entirely server-side. The first version of this will be shipped over the next 1-2 months.
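
To make the "treat it like a REST endpoint" point concrete, here is a rough sketch. Every name and type in it (Caller, loadOrders, queryOrdersAction) is hypothetical and not part of CopilotKit; the point is only that a backend action should authenticate and validate its inputs itself, exactly as a public endpoint would:

```ts
// Sketch only: a backend-defined action written with the same mindset as a
// public REST endpoint. All types and helpers below are hypothetical.
type Caller = { userId: string; customerId: string };

declare function loadOrders(customerId: string): Promise<unknown[]>; // hypothetical data access

async function queryOrdersAction(args: unknown, caller: Caller): Promise<unknown[]> {
  // Validate the LLM/frontend-supplied arguments before touching any data.
  const customerId = (args as { customerId?: unknown })?.customerId;
  if (typeof customerId !== "string") throw new Error("invalid arguments");

  // Authorize: scope the query to the authenticated caller, never to
  // whatever id the prompt happened to contain.
  if (customerId !== caller.customerId) throw new Error("forbidden");

  return loadOrders(customerId);
}
```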

If you're interested, we'd love to include you in the design or review phases of this feature. It sounds like you have a great security mindset. Let me know.


Separately: if you want to tell us what you are building with LangGraph, we are highly interested in what our users are building! Again, happy to jump on a call, or you can share in the showcase channel on our Discord.
