
(ValidationException) when calling the InvokeModel operation: Malformed input request: expected minLength: 1, actual: 0, please reformat your input and try again #279

Open
ajaylamba-provar opened this issue Dec 19, 2023 · 8 comments
Labels: enhancement (New feature or request)

@ajaylamba-provar
Contributor

ajaylamba-provar commented Dec 19, 2023

I'm sometimes getting the error below in continued conversations.

"message":"All records failed processing. 1 individual errors logged separately below.\n\nTraceback (most recent call last):\n File \"/opt/python/aws_lambda_powertools/utilities/batch/base.py\", line 500, in _process_record\n result = self.handler(record=data)\n ^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/python/aws_lambda_powertools/tracing/tracer.py\", line 678, in decorate\n response = method(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/var/task/index.py\", line 122, in record_handler\n handle_run(detail)\n File \"/var/task/index.py\", line 95, in handle_run\n response = model.run(\n ^^^^^^^^^^\n File \"/var/task/adapters/base/base.py\", line 170, in run\n return self.run_with_chain(prompt, workspace_id)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/var/task/adapters/base/base.py\", line 107, in run_with_chain\n result = conversation({\"question\": user_prompt})\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/python/langchain/chains/base.py\", line 310, in __call__\n raise e\n File \"/opt/python/langchain/chains/base.py\", line 304, in __call__\n self._call(inputs, run_manager=run_manager)\n File \"/opt/python/langchain/chains/conversational_retrieval/base.py\", line 148, in _call\n docs = self._get_docs(new_question, inputs, run_manager=_run_manager)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/python/langchain/chains/conversational_retrieval/base.py\", line 305, in _get_docs\n docs = self.retriever.get_relevant_documents(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/python/langchain/schema/retriever.py\", line 211, in get_relevant_documents\n raise e\n File \"/opt/python/langchain/schema/retriever.py\", line 204, in get_relevant_documents\n result = self._get_relevant_documents(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/python/genai_core/langchain/workspace_retriever.py\", line 13, in _get_relevant_documents\n result = genai_core.semantic_search.semantic_search(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/python/genai_core/semantic_search.py\", line 25, in semantic_search\n return query_workspace_open_search(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/python/genai_core/opensearch/query.py\", line 48, in query_workspace_open_search\n query_embeddings = genai_core.embeddings.generate_embeddings(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/python/genai_core/embeddings.py\", line 28, in generate_embeddings\n ret_value.extend(_generate_embeddings_bedrock(model, batch))\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/python/genai_core/embeddings.py\", line 88, in _generate_embeddings_bedrock\n response = bedrock.invoke_model(\n ^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/python/botocore/client.py\", line 535, in _api_call\n return self._make_api_call(operation_name, kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/python/aws_xray_sdk/ext/botocore/patch.py\", line 38, in _xray_traced_botocore\n return xray_recorder.record_subsegment(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/python/aws_xray_sdk/core/recorder.py\", line 456, in record_subsegment\n return_value = wrapped(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/python/botocore/client.py\", line 983, in _make_api_call\n raise error_class(parsed_response, operation_name)\nbotocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: expected minLength: 1, actual: 0, please reformat your input and try again.\n",

This could be related to `get_condense_question_prompt`, where questions are rephrased.

```python
from langchain.prompts import PromptTemplate

def get_condense_question_prompt(self):
    template = """
        Human: Given the following conversation and a follow up input, rephrase the follow up question to be a standalone question. The question should be rephrased such that the rephrased question structure remains as if Human has asked it.
        If the followup input is just a feedback and not actually a question, just take the input and rephrase it as a feedback not a question.

        Chat History:
        {chat_history}
        Follow Up Input: {question}
        Standalone question:"""

    # PromptTemplate only needs the variables and the template; the stray
    # chat_history="{chat_history}" keyword argument has been dropped.
    return PromptTemplate(
        input_variables=["chat_history", "question"],
        template=template,
    )
```

I am using Claude 2.1 for responses, Amazon Titan for embeddings, and OpenSearch for storage.
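For context, the stack trace shows the empty string reaching `_generate_embeddings_bedrock` and then `bedrock.invoke_model`. The error is reproducible directly against the Titan embeddings model; a minimal sketch (the model id and request shape are my assumptions for Titan text embeddings, not the repo's exact call):

```python
import json

import boto3

# Titan embeddings enforces minLength: 1 on inputText, so an empty condensed
# question triggers the ValidationException seen above.
bedrock = boto3.client("bedrock-runtime")

bedrock.invoke_model(
    modelId="amazon.titan-embed-text-v1",
    body=json.dumps({"inputText": ""}),  # empty -> "expected minLength: 1, actual: 0"
)
```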

@ajaylamba-provar
Contributor Author

Here is an example of such a conversation:

[screenshot: example conversation]

@massi-ang massi-ang added the enhancement New feature or request label Dec 19, 2023
@massi-ang
Collaborator

massi-ang commented Dec 19, 2023

Can you provide the prompts? You'll find them by enabling metadata via the settings gear at the bottom right of the UI.

@ajaylamba-provar
Contributor Author

The metadata is empty when the error is thrown:

[screenshot: empty metadata panel]

Also, see the following conversation: for one input it throws the error, but when the input is slightly changed, it returns a response.

[screenshot: conversation where a slightly changed input succeeds]

@ajaylamba-provar
Contributor Author

I have noticed that the issue occurs when the follow-up question is very short, e.g. "thanks", "bye", "ok bye".

@ajaylamba-provar
Contributor Author

ajaylamba-provar commented Dec 19, 2023

I was able to reduce the occurrence of this by using the following code for rephrasing the question:

```python
from langchain.prompts import PromptTemplate

def get_condense_question_prompt(self):
    template = """
        Your task is to rephrase the follow up input, keeping in mind the current conversation.
        The question should be rephrased such that the rephrased question structure remains as if Human has asked it.
        If the followup input is just a feedback and not actually a question, just take the input and rephrase it as a feedback.
        The rephrased standalone question/feedback MUST NOT be empty.

        Human: Here is the current conversation so far

        {chat_history}

        Given the above conversation and a follow up input, rephrase the follow up input to be a standalone question/feedback.

        Follow Up Input: {question}

        Standalone question:"""

    # As above, PromptTemplate takes only input_variables and template.
    return PromptTemplate(
        input_variables=["chat_history", "question"],
        template=template,
    )
```
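Independent of the prompt wording, a guard in the embeddings layer would stop an empty string from ever reaching Bedrock. A defensive sketch, not the repo's actual `_generate_embeddings_bedrock` (the function name and error behavior here are hypothetical):

```python
import json

import boto3

bedrock = boto3.client("bedrock-runtime")

def embed_with_guard(text: str) -> list[float]:
    # Hypothetical guard: Titan enforces minLength: 1 on inputText,
    # so reject blank strings before calling Bedrock at all.
    text = text.strip()
    if not text:
        raise ValueError("Refusing to embed an empty string (Titan requires minLength: 1)")
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]
```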

@ajaylamba-provar
Contributor Author

I have started getting this issue again, now with increased frequency. Any suggestions for a fix, please?
@massi-ang @bigadsoleiman

@roselle11111

@ajaylamba-provar I am getting the same issue using this condensed prompt. Do you have any workaround?

@ajaylamba-provar
Contributor Author

@roselle11111 The workaround I applied was to fine-tune the condense-question prompt to make sure that the rephrased question is never blank.
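If it helps, a belt-and-braces option on top of the prompt tuning is to fall back to the raw follow-up whenever the model still returns a blank rephrasing. A sketch (the helper and its wiring are hypothetical; `llm_chain` is assumed to be the LangChain `LLMChain` built from the condense prompt):

```python
def condense_question(llm_chain, chat_history: str, question: str) -> str:
    # Hypothetical fallback: if the rephrasing still comes back empty,
    # reuse the original follow-up so the embeddings call never sees "".
    condensed = llm_chain.predict(chat_history=chat_history, question=question).strip()
    return condensed or question
```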
