
[Llama 3.1] Currently only named tools are supported error when using it for llama 3.1 when not setting toolChoice or using toolChoice: 'auto' #2503

Open
vishalsaugat opened this issue Jul 31, 2024 · 10 comments
Labels: bug (Something isn't working), docs (Improvements or additions to documentation)

Comments

@vishalsaugat

Description

I am using the Llama 3.1 model via the @ai-sdk/openai package.
I get this error:

{"object":"error","message":"[{'type': 'value_error', 'loc': ('body',), 'msg': 'Value error, Currently only named tools are supported.',.........

when using toolChoice: 'auto' in streamUI, or when not setting it at all.
On reading further, I found out that Llama doesn't support the "tool_choice": "auto" parameter in its API. If toolChoice is not set, the SDK automatically converts it to 'auto' here:
https://github.com/vercel/ai/blob/f7a94535f1d8b8a6f4179d7f5cd762389ef6de4b/packages/core/core/prompt/prepare-tools-and-tool-choice.ts#L37C21-L37C25

Proposed solution: support null (or a similar value) as a parameter that omits tool_choice from the request to Llama and other such model APIs.
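
A minimal sketch of what that could look like (hypothetical; the real prepare-tools-and-tool-choice.ts has more cases, and the names here are illustrative):

```ts
// Hypothetical sketch — not the SDK's actual code. The idea: when the caller
// never set toolChoice, omit tool_choice from the request body entirely
// instead of defaulting to 'auto', so providers that reject
// "tool_choice": "auto" (like some Llama 3.1 deployments) can fall back to
// their own default.
type ToolChoice =
  | 'auto'
  | 'none'
  | { type: 'tool'; toolName: string }
  | undefined;

function prepareToolChoice(toolChoice: ToolChoice):
  | 'auto'
  | 'none'
  | { type: 'function'; function: { name: string } }
  | undefined {
  if (toolChoice == null) {
    return undefined; // leave tool_choice out of the request body
  }
  if (toolChoice === 'auto' || toolChoice === 'none') {
    return toolChoice;
  }
  // Named tool: the only form the failing provider accepts.
  return { type: 'function', function: { name: toolChoice.toolName } };
}
```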

Code example

```
{"object":"error","message":"[{'type': 'value_error', 'loc': ('body',), 'msg': 'Value error, Currently only named tools are supported.',
```

### Additional context

_No response_
@vishalsaugat vishalsaugat changed the title [Llama 3.1] Currently only named tools are supported error when using it for llama 3.1 when using toolChoice: 'auto' [Llama 3.1] Currently only named tools are supported error when using it for llama 3.1 when not setting toolChoice or using toolChoice: 'auto' Jul 31, 2024
@lgrammel
Collaborator

lgrammel commented Aug 1, 2024

Afaik OpenAI does not offer Llama 3.1. The error you see comes from a specific provider. Which provider are you using?

@vishalsaugat
Author

I am following the steps provided on the Vercel AI SDK page:

```tsx
'use server';

import { tool } from 'ai'; // tool() is exported from 'ai', not 'ai/rsc'
import { streamUI } from 'ai/rsc';
import { createOpenAI as createGroq } from '@ai-sdk/openai';
import { z } from 'zod';

const groq = createGroq({
  baseURL: 'https://api.groq.com/openai/v1',
  apiKey: process.env.GROQ_API_KEY,
});

export async function streamComponent() {
  const result = await streamUI({
    model: groq('llama-3.1-70b-versatile'),
    prompt: 'Get the weather for San Francisco',
    text: ({ content }) => <div>{content}</div>,
    tools: {
      getWeather: tool({
        description: 'Get the weather for a location',
        parameters: z.object({ location: z.string() }),
        generate: async function* ({ location }) {
          yield <div>loading...</div>;
          const weather = '25c'; // await getWeather(location);
          return (
            <div>
              the weather in {location} is {weather}.
            </div>
          );
        },
      }),
    },
  });
  return result.value;
}
```

This should work according to the documentation, but it doesn't, because of the toolChoice parameter.
Reference: https://sdk.vercel.ai/docs/guides/llama-3_1

@lgrammel
Collaborator

lgrammel commented Aug 1, 2024

@vishalsaugat can you try removing the tool() function call? See #2513

@lgrammel lgrammel added bug Something isn't working docs Improvements or additions to documentation labels Aug 1, 2024
@vishalsaugat
Author

@lgrammel Yes, I have tried that too; it didn't work.
Essentially, there should be a toolChoice value (e.g. toolChoice: 'donotset') that does not pass the tool_choice parameter when calling the model API. Currently it always passes 'auto', 'none', or a function object.
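
In the meantime, one possible workaround is to strip tool_choice from the outgoing request before it reaches the provider. This is only a sketch, assuming the @ai-sdk/openai factory's custom fetch option; the endpoint and environment variable names are placeholders:

```ts
// Workaround sketch (untested): wrap fetch so that "tool_choice": "auto" is
// removed from the request body before it is sent. LLAMA_BASE_URL and
// LLAMA_API_KEY are placeholders for your own deployment's values.
import { createOpenAI } from '@ai-sdk/openai';

const llama = createOpenAI({
  baseURL: process.env.LLAMA_BASE_URL, // e.g. an Azure-hosted Llama endpoint
  apiKey: process.env.LLAMA_API_KEY,
  fetch: async (url, options) => {
    if (typeof options?.body === 'string') {
      const body = JSON.parse(options.body);
      if (body.tool_choice === 'auto') {
        delete body.tool_choice; // omit instead of sending 'auto'
        options = { ...options, body: JSON.stringify(body) };
      }
    }
    return fetch(url, options);
  },
});
```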

@lgrammel
Collaborator

lgrammel commented Aug 1, 2024

@vishalsaugat I believe this might be caused by some changes on the groq side. I've reached out to them. Can you try the 8b model instead?

@nicoalbanese
Contributor

Hey @vishalsaugat - would you be able to upload your repository so I can take a look? I've just run all of the code snippets from the guide again (including the streamUI one mentioned) and it's working as expected.

@vishalsaugat
Author

Ok, so I guess it's because Groq is handling this on its own side and not forwarding tool_choice: 'auto' to Llama 3.1. I am using Llama 3.1 via Azure, and it seems like Azure is not omitting tool_choice.

@vishalsaugat
Author

vishalsaugat commented Aug 1, 2024

This is my code (excerpted from a larger server action; the enclosing try block and imports are omitted):

```tsx
result = await streamUI({
      model: getModelFunction(model),
      initial: <SpinnerMessage model={model} />,
      maxTokens: getMaxTokens(model),
      system: getPrompt(systemPrompt),
      messages: [
        ...aiState.get().messages.map((message: any) => ({
          role: message.role,
          content: message.content,
          name: message.name,
          model: message.model
        }))
      ],
      text: ({ content, done, delta }) => {
        if (!textStream) {
          textStream = createStreamableValue('')
          textNode = <BotMessage model={model} content={textStream.value} />
        }

        if (done) {
          textStream.done()
          aiState.done({
            ...aiState.get(),
            messages: [
              ...aiState.get().messages,
              {
                id: nanoid(),
                role: 'assistant',
                content,
                model
              }
            ]
          })
        } else {
          textStream.update(delta)
        }
        return textNode
      },
      tools: {
        generateImage: {
          description: 'Generate an image based on the user message.',
          parameters: z.object({
            text: z.string().describe('A detailed description of the image to be generated.')
          }),
          generate: async function* ({ text }: { text: string }) {
            yield (
              <BotCard model={model}>
                {spinner}
              </BotCard>
            )

            await sleep(1000)
            const toolCallId = nanoid()

            let result: ImageResult[] = await generateImage({
              prompt: text,
            });

            aiState.done({
              ...aiState.get(),
              messages: [
                ...aiState.get().messages,
                {
                  id: nanoid(),
                  role: 'assistant',
                  content: [
                    {
                      type: 'tool-call',
                      toolName: 'generateImage',
                      toolCallId,
                      args: { text }
                    }
                  ],
                  model
                },
                {
                  id: nanoid(),
                  role: 'tool',
                  content: [
                    {
                      type: 'tool-result',
                      toolName: 'generateImage',
                      toolCallId,
                      result
                    }
                  ],
                  model
                }
              ]
            })
            return (
              <BotMessageImage result={result} model={model} />
            )
          },
        }
      }

    })
  } catch (error: any) {
    try {
      result = {
        value: error.responseBody ? JSON.parse(error.responseBody).error.message : error.message
      }
    } catch (err: any) {
      result = {
        value: (<h1>{error.message}</h1>)
      }
    }
  }
  return {
    id: nanoid(),
    display: result.value
  }
}
```

This code works if I also pass toolChoice: 'none', but when I pass toolChoice: 'auto' or don't pass toolChoice at all, it fails with the error I pasted in my first message.
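
For reference, this is the shape of the call that works (a fragment of the code above; note that 'none' disables tool calls entirely, so it is only a stopgap):

```tsx
const result = await streamUI({
  model: getModelFunction(model),
  toolChoice: 'none', // accepted by the provider; 'auto' (or omitting toolChoice) fails
  // ...same system/messages/text/tools as in the snippet above
});
```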

@vishalsaugat
Author

@lgrammel It works with Llama 8b. Is it possible to add support for the behavior mentioned above?

@vishalsaugat
Author

UPDATE: I get the same error with Llama 8b as well.
