
OPEA - AmazonBedrock integration #1031

Closed
wants to merge 86 commits

Conversation


@Vihanth Vihanth commented Dec 13, 2024

Description

This PR adds Amazon Bedrock LLMs to OPEA text-generation.
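
As an illustrative sketch only (the model id, environment variable name, and payload shape below are assumptions for illustration, not the PR's actual code), a Bedrock text-generation wrapper might build an InvokeModel-style request like this:

```python
import json
import os

# Hypothetical sketch of the request a Bedrock text-generation wrapper
# might pass to boto3's bedrock-runtime InvokeModel API. The model id,
# env var, and body schema are illustrative assumptions.
def build_bedrock_request(prompt: str, max_tokens: int = 512) -> dict:
    return {
        "modelId": os.environ.get("BEDROCK_MODEL_ID", "anthropic.claude-v2"),
        "contentType": "application/json",
        "accept": "application/json",
        # The body is a JSON string; its schema varies per model provider.
        "body": json.dumps({
            "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
            "max_tokens_to_sample": max_tokens,
        }),
    }
```

The actual integration would pass such a dict to a `bedrock-runtime` client; the point here is only that credentials and model selection come from the environment, which is why the CI credential setup discussed below matters.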

Issues

n/a

Type of change

List the type of change like below. Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds new functionality)
  • Breaking change (fix or feature that would break existing design and interface)
  • Others (enhancement, documentation, validation, etc.)

Tests

Tested by building the Docker image and deploying it with Docker Compose.

@joshuayao joshuayao linked an issue Jan 7, 2025 that may be closed by this pull request
@joshuayao joshuayao added this to the v1.2 milestone Jan 7, 2025
@joshuayao
Collaborator

Hi @Vihanth, The comps/llms/text-generation module has been refactored for v1.2. Could you please update your code to align with these changes? Thanks.
P.S.: OPEA v1.2 plans to code freeze in WW3.

@joshuayao
Collaborator

Hi @Vihanth, OPEA is approaching its code freeze. Could you please check the CI failures?

@smguggen
Contributor

@joshuayao Sorry about the delay; we had a bit of a staffing shake-up, but we have someone on this again. How much time do we have before the code freeze?

@srinarayan-srikanthan
Collaborator

Hi @smguggen, the code freeze is on Friday. Also, the CI failure appears to be a missing-credential issue. Please feel free to reach out if you need any help resolving the issue or merging this PR.

@smguggen
Contributor

@srinarayan-srikanthan That's exactly what was causing the delay: our security reviewer was not a fan of storing long-lived credentials to perform the test, so we set up OIDC. I think we can get that together today, so everything should be handled well before Friday. Thanks!

@chensuyue
Collaborator

I can help add the new secrets for CI. But my concern is that this PR has not been adapted to the new code structure. comps/llms/text-generation will be fully removed and replaced by https://github.com/opea-project/GenAIComps/tree/main/comps/llms/src/text-generation/integrations.

@smguggen
Contributor

smguggen commented Jan 16, 2025

@chensuyue Those changes are underway. Can you share the name of the key for the secret? The secret itself was shared with @srinarayan-srikanthan.

@jonminkin97
Contributor

@chensuyue We will need to use the secret key to get AWS credentials. Once we have the secret key, we will need to update 1165 to use that secret key, and ensure that PR is merged prior to this one.

@chensuyue
Collaborator

> @chensuyue We will need to use the secret key to get AWS credentials. Once we have the secret key, we will need to update 1165 to use that secret key, and ensure that PR is merged prior to this one.

@srinarayan-srikanthan sent me the AWS_IAM_ROLE_ARN, and I have added it to the secrets. But what about these three parameters? It seems the test also needs them:
-e AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} \
-e AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} \
-e AWS_SESSION_TOKEN=${AWS_SESSION_TOKEN}

@jonminkin97
Contributor

jonminkin97 commented Jan 17, 2025

> @chensuyue We will need to use the secret key to get AWS credentials. Once we have the secret key, we will need to update 1165 to use that secret key, and ensure that PR is merged prior to this one.
>
> @srinarayan-srikanthan sent me the AWS_IAM_ROLE_ARN, and I have added it to the secrets. But what about these three parameters? It seems the test also needs them: -e AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} -e AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} -e AWS_SESSION_TOKEN=${AWS_SESSION_TOKEN}

The AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN environment variables are automatically set by the "configure-aws-credentials" action.

That action takes an IAM Role ARN as input and sets those three environment variables for subsequent steps. See the documentation for configure-aws-credentials.
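
To make the flow above concrete, here is a minimal sketch (not from the PR; the helper name is hypothetical) of how a test script only needs to read the three variables that the action has already exported:

```python
import os

# Illustrative sketch: the configure-aws-credentials action exports these
# three variables for later workflow steps, so a test script just reads
# them from the environment rather than storing long-lived secrets.
def resolve_aws_env(env=os.environ) -> dict:
    creds = {
        "AWS_ACCESS_KEY_ID": env.get("AWS_ACCESS_KEY_ID"),
        "AWS_SECRET_ACCESS_KEY": env.get("AWS_SECRET_ACCESS_KEY"),
        # Session token is present for temporary (OIDC-assumed) credentials.
        "AWS_SESSION_TOKEN": env.get("AWS_SESSION_TOKEN"),
    }
    missing = [name for name, value in creds.items() if not value]
    if missing:
        raise RuntimeError(f"Missing AWS credentials: {', '.join(missing)}")
    return creds
```

Failing fast with a clear error when a variable is absent makes a missing-credential CI failure obvious, instead of surfacing later as an opaque AWS auth error.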

@jonminkin97
Contributor

@chensuyue Hopefully the above comment answers your question and the question on 1165. The 1165 pull request must be merged so that the workflow checks in this PR can pass. My comment above should address the comment on that PR as well.

@jonminkin97 jonminkin97 requested a review from ZePan110 as a code owner January 17, 2025 21:25
@chensuyue
Collaborator

> @chensuyue Hopefully the above comment answers your question and the question on 1165. The 1165 pull request must be merged so that the workflow checks in this PR can pass. My comment above should address the comment on that PR as well.

That PR has been merged. But looking at the CI, there is another issue that needs to be resolved.

@chensuyue
Collaborator

Regarding pre-commit.ci: the fix is not committed to your repo directly, so you need to run it locally to fix the issue:

pip install pre-commit
pre-commit install
pre-commit run --all-files

@letonghan
Collaborator

letonghan commented Jan 20, 2025

Hi @jonminkin97, please delete the comps/llms/text-generation folder and retain only the comps/llms/src folder.

This is part of the file-structure refactoring for this release. All of the LLM-related components will be integrated into opea_llm_microservice.py, and all files outside the src folder will be deleted. Please refer to the new file structure here.
Thanks for your work!

@chensuyue chensuyue changed the base branch from main to pre-ci January 20, 2025 13:37
@chensuyue
Collaborator

Switching the target branch for testing. After the tests pass, we can switch back to the main branch.

@chensuyue
Collaborator

I have set the permissions in the pre-ci branch, but the test still failed: 690363d

@jonminkin97 jonminkin97 requested a review from XinyaoWa as a code owner January 22, 2025 18:39
minmin-intel and others added 4 commits January 23, 2025 01:59
* initial code for sql agent llama

Signed-off-by: minmin-intel <[email protected]>

* add test for sql agent

Signed-off-by: minmin-intel <[email protected]>

* update sql agent test

Signed-off-by: minmin-intel <[email protected]>

* fix bugs and use vllm to test sql agent

Signed-off-by: minmin-intel <[email protected]>

* add tag-bench test and google search tool

Signed-off-by: minmin-intel <[email protected]>

* test sql agent with hints

Signed-off-by: minmin-intel <[email protected]>

* fix bugs for sql agent with hints and update test

Signed-off-by: minmin-intel <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* add readme for sql agent and fix ci bugs

Signed-off-by: minmin-intel <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* add sql agent using openai models

Signed-off-by: minmin-intel <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix bugs in sql agent openai

Signed-off-by: minmin-intel <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* make wait time longer for sql agent microservice to be ready

Signed-off-by: minmin-intel <[email protected]>

* update readme

Signed-off-by: minmin-intel <[email protected]>

* fix test bug

Signed-off-by: minmin-intel <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* skip planexec with vllm due to vllm-gaudi bug

Signed-off-by: minmin-intel <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* debug ut issue

Signed-off-by: minmin-intel <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* use vllm for all uts

Signed-off-by: minmin-intel <[email protected]>

* debug ci issue

Signed-off-by: minmin-intel <[email protected]>

* change vllm port

Signed-off-by: minmin-intel <[email protected]>

* update ut

Signed-off-by: minmin-intel <[email protected]>

* remove tgi server

Signed-off-by: minmin-intel <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* align vllm port

Signed-off-by: minmin-intel <[email protected]>

---------

Signed-off-by: minmin-intel <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

opea bedrock integration

Signed-off-by: vihanth sura <[email protected]>
* remove examples gateway.

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* remove gateway.

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* refine service code.

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update http_service.py

* remove gateway ut.

* remove gateway ut.

* fix conflict service name.

* Update http_service.py

* add handle message ut.

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* remove `multiprocessing.Process` start server code.

* fix ut.

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* remove multiprocessing and enhance ut for coverage.

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: chen, suyue <[email protected]>
* vllm support openai API

Signed-off-by: Xinyao Wang <[email protected]>

* fix bug

Signed-off-by: Xinyao Wang <[email protected]>

* fix bug

Signed-off-by: Xinyao Wang <[email protected]>

* test_llms_text-generation_vllm_langchain_on_intel_hpu.sh

Signed-off-by: Xinyao Wang <[email protected]>

* fix time

Signed-off-by: Xinyao Wang <[email protected]>

* fix bug

Signed-off-by: Xinyao Wang <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix bug

Signed-off-by: Xinyao Wang <[email protected]>

---------

Signed-off-by: Xinyao Wang <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
lvliang-intel and others added 18 commits January 23, 2025 02:04
* Disable telemetry by default

Signed-off-by: lvliang-intel <[email protected]>
Refine retrievers Dockerfile and requirements.txt and move --extra-index-url into Dockerfile for CPU Docker image.

Signed-off-by: letonghan <[email protected]>
Signed-off-by: Jonathan Minkin <[email protected]>

Add id-token permissions for workflow

Signed-off-by: Jonathan Minkin <[email protected]>
* Fix test_telemetry.py import

Signed-off-by: Abolfazl Shahbazi <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Abolfazl Shahbazi <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Enable AWS access when testing microservices

Signed-off-by: Jonathan Minkin <[email protected]>

* Add id-token permissions to workflow to allow OIDC auth

Signed-off-by: Jonathan Minkin <[email protected]>

---------

Signed-off-by: Jonathan Minkin <[email protected]>
)" (opea-project#1173)

This reverts commit b0abbde to recover the CI test. 

Signed-off-by: chensuyue <[email protected]>
Issue: When the property graph store gets filled (~12K nodes, 15K relationships), insertion time in dataprep gets slow.
Extraction + insertion starts at ~30 sec and grows to ~800 sec once the store is filled.
The perf bottleneck is this Cypher call in llama-index used for node upsert:
https://github.com/run-llama/llama_index/blob/795bebc2bad31db51b854a5c062bedca42397630/llama-index-integrations/graph_stores/llama-index-graph-stores-neo4j/llama_index/graph_stores/neo4j/neo4j_property_graph.py#L334

Performance optimizations in this PR:

1. Move neo4j GraphStore initialization out of the dataprep and retrieve functions so it's only performed once at the beginning
2. Disable schema_refresh of the neo4j graph when not necessary, because for large graphs this is very slow
3. Switch to the OpenAILike class from llama-index to work with vllm or tgi endpoints without code changes (only docker compose.yaml changes)
4. Added concurrency and batching for generating community summaries and generating answers from summaries
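
The batching-plus-concurrency idea in item 4 can be sketched as follows (an illustration only, not the PR's code; `summarize` is a hypothetical stand-in for the real async LLM call):

```python
import asyncio

# Hypothetical stand-in for an async LLM request that summarizes one
# graph community.
async def summarize(community: str) -> str:
    await asyncio.sleep(0)  # placeholder for network I/O
    return f"summary of {community}"

# Process communities concurrently, but in fixed-size batches so a large
# graph doesn't fire thousands of simultaneous LLM calls at once.
async def summarize_in_batches(communities, batch_size=4):
    results = []
    for i in range(0, len(communities), batch_size):
        batch = communities[i:i + batch_size]
        results.extend(await asyncio.gather(*(summarize(c) for c in batch)))
    return results
```

Capping concurrency per batch keeps memory and endpoint load bounded while still overlapping requests, which is the usual trade-off behind this kind of optimization.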

---------

Signed-off-by: Rita Brugarolas <[email protected]>
According to the RFC's Phase 2 plan, this PR adds image query support, PDF ingestion support, and dynamic ports to the microservices used by MultimodalQnA. This PR goes with this one in GenAIExamples.

Signed-off-by: dmsuehir <[email protected]>
Signed-off-by: Melanie Buehler <[email protected]>
The original output was unclear; this optimization ensures that the user can find the source of the path-check CI failure.

Signed-off-by: ZePan110 <[email protected]>
Enable redis for saving agent_config, messages:

1. agent recovery from redis (agent_config)
2. assemble history for multi-turn

related opea-project#977
* Add helm-chart CI test workflow.

---------
Signed-off-by: ZePan110 <[email protected]>
Refactor dataprep microservice, including opensearch, elasticsearch, pinecone, pgvector, neo4j, qdrant, vdms.

Signed-off-by: lvliang-intel <[email protected]>
Co-authored-by: chen, suyue <[email protected]>
Fix the issue where CD test files cannot be obtained and add GOOGLE secrets.

Signed-off-by: ZePan110 <[email protected]>
Enhance the bug & feature template according to the issue opea-project/GenAIExamples#1002.

Co-authored-by: ZhangJianyu <[email protected]>
1. Fix CD workflow type error.
2. Change dockerhub default value and add the judgment that matrix is not empty to fix workflow errors.

Signed-off-by: ZePan110 <[email protected]>
* delete intent detection

Remove comps that use the old code structure

Signed-off-by: Jonathan Minkin <[email protected]>

Trigger new run of test

Re-trigger new run of test
Development

Successfully merging this pull request may close these issues.

[Feature] AWS Bedrock endpoint