
Conversation

@ayman3000

Problem:
The current InMemoryMemoryService uses simple keyword matching to search agent memories. This means:

  • Synonyms don't work ("happy" won't match "joyful")
  • Context is lost (keyword matching ignores semantic meaning)
  • No persistence (all memories are lost on restart)

The VertexAiRagMemoryService provides semantic search, but requires Google Cloud Platform infrastructure, which isn't suitable for local development or self-hosted deployments.

Solution:
Add a new ChromaMemoryService that provides semantic search capabilities using ChromaDB with pluggable embedding providers. This gives developers:

  • Semantic similarity search using vector embeddings
  • Local/self-hosted operation with no cloud dependency
  • Persistence to disk for long-term storage
  • Pluggable architecture supporting Ollama (included), OpenAI, or custom embedding providers
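To illustrate what vector-based semantic search adds over keyword matching, here is a small self-contained sketch using cosine similarity over toy vectors. The vectors are made up for illustration only; in the actual service, ChromaDB stores real embeddings produced by the configured provider:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: "happy" and "joyful" sit close together in vector space,
# so a semantic search matches them even though the strings share no keywords.
embeddings = {
    "happy":  [0.9, 0.1, 0.0],
    "joyful": [0.8, 0.2, 0.1],
    "table":  [0.0, 0.1, 0.9],
}

query = embeddings["happy"]
ranked = sorted(
    (word for word in embeddings if word != "happy"),
    key=lambda w: cosine_similarity(query, embeddings[w]),
    reverse=True,
)
print(ranked[0])  # joyful
```

This is exactly the case keyword matching misses: "joyful" shares no tokens with "happy", yet ranks far above the unrelated "table".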

Testing Plan

Unit Tests:

  • I have added or updated unit tests for my change.
  • All unit tests pass locally.
tests/unittests/memory/test_chroma_memory_service.py::test_add_session_to_memory PASSED
tests/unittests/memory/test_chroma_memory_service.py::test_add_session_with_no_events_to_memory PASSED
tests/unittests/memory/test_chroma_memory_service.py::test_search_memory_returns_results PASSED
tests/unittests/memory/test_chroma_memory_service.py::test_search_memory_no_match PASSED
tests/unittests/memory/test_chroma_memory_service.py::test_search_memory_is_scoped_by_user PASSED
tests/unittests/memory/test_chroma_memory_service.py::test_upsert_updates_existing_documents PASSED

======================== 6 passed ========================

Manual End-to-End (E2E) Tests:

To manually test:

  1. Start Ollama server:

    ollama serve
    ollama pull nomic-embed-text
  2. Run the example:

    cd contributing/samples/memory_chroma
    python main.py
  3. Expected behavior:

    • Session 1 creates memories (user name, hobbies, food preferences)
    • Memories are saved to ChromaDB with embeddings
    • Session 2 queries memories semantically
    • Agent correctly recalls "badminton" for hobbies and "burger" for food

Checklist

  • I have read the CONTRIBUTING.md document.
  • I have performed a self-review of my own code.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have added tests that prove my fix is effective or that my feature works.
  • New and existing unit tests pass locally with my changes.
  • I have manually tested my changes end-to-end.
  • Any dependent changes have been merged and published in downstream modules.

Additional context

Files Added:

  • src/google/adk/memory/chroma_memory_service.py - Main memory service
  • src/google/adk/memory/embeddings/base_embedding_provider.py - Abstract base class
  • src/google/adk/memory/embeddings/ollama_embedding_provider.py - Ollama integration
  • tests/unittests/memory/test_chroma_memory_service.py - Unit tests
  • contributing/samples/memory_chroma/ - Example project

Files Modified:

  • pyproject.toml - Added optional chroma dependency
  • src/google/adk/memory/__init__.py - Added exports

Usage Example:

from google.adk.memory import ChromaMemoryService, OllamaEmbeddingProvider

embedding_provider = OllamaEmbeddingProvider(model="nomic-embed-text")
memory = ChromaMemoryService(
    embedding_provider=embedding_provider,
    persist_directory="./memory_db"
)
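The pluggable architecture also admits custom providers. The exact abstract method name on BaseEmbeddingProvider isn't shown in this thread, so the `embed` signature below is an assumption, and the hash-derived vectors are purely illustrative (deterministic, but carrying no semantic meaning):

```python
import hashlib

class HashEmbeddingProvider:
    """Illustrative stand-in for a custom embedding provider.

    Assumes the BaseEmbeddingProvider contract is "texts in, vectors out";
    the real abstract method name and signature may differ.
    """

    def __init__(self, dim: int = 8):
        self.dim = dim

    def embed(self, texts: list[str]) -> list[list[float]]:
        # Derive a deterministic pseudo-vector from each text's SHA-256
        # digest. Real providers (Ollama, OpenAI) return learned semantic
        # embeddings instead.
        vectors = []
        for text in texts:
            digest = hashlib.sha256(text.encode("utf-8")).digest()
            vectors.append([b / 255.0 for b in digest[: self.dim]])
        return vectors

provider = HashEmbeddingProvider(dim=8)
vecs = provider.embed(["I love badminton", "burgers are great"])
print(len(vecs), len(vecs[0]))  # 2 8
```

A deterministic provider like this can also be handy in unit tests, where calling out to a live embedding model would be slow and flaky.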

@gemini-code-assist
Contributor

Summary of Changes

Hello @ayman3000, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a robust ChromaMemoryService to significantly enhance the application's memory capabilities. It moves beyond simple keyword matching by implementing semantic search using vector embeddings, offering improved contextual understanding and synonym handling. A key benefit is the ability to persist memory to disk and operate locally, providing a flexible, self-hosted solution that avoids external cloud dependencies. This new service is designed with a pluggable architecture for embedding providers, exemplified by the included OllamaEmbeddingProvider, allowing developers to choose their preferred embedding models.

Highlights

  • New ChromaDB Memory Service: Introduces a ChromaMemoryService for advanced semantic memory search, addressing limitations of the existing InMemoryMemoryService by enabling contextual understanding and synonym handling.
  • Pluggable Embedding Providers: Supports various embedding providers through a new BaseEmbeddingProvider interface, with an OllamaEmbeddingProvider included for flexible integration with different models and local execution.
  • Persistent and Local Memory: Enables memory persistence to disk and local/self-hosted operation, removing cloud dependencies for development and deployment scenarios.
  • Enhanced Search Capabilities: Provides semantic similarity search, significantly improving memory recall with synonyms and contextual understanding compared to previous keyword-based matching.
  • Comprehensive Testing and Example: Includes new unit tests for the ChromaMemoryService and a detailed manual end-to-end testing example demonstrating its setup and usage with Ollama.


@adk-bot adk-bot added the services [Component] This issue is related to runtime services, e.g. sessions, memory, artifacts, etc label Jan 18, 2026
- Add ChromaMemoryService for semantic search over session memories
- Add BaseEmbeddingProvider abstract base class
- Add OllamaEmbeddingProvider using Ollama's /api/embed endpoint
- Add chromadb as optional dependency
- Add comprehensive unit tests
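For reference, Ollama's /api/embed endpoint mentioned above takes a JSON body of the shape built below (a model name plus a string or list of strings under "input") and responds with an "embeddings" list, one vector per input. The sketch only constructs the payload; the actual POST to a local Ollama server is described in comments:

```python
import json

def build_embed_payload(model: str, texts: list[str]) -> dict:
    """Build the JSON body for Ollama's POST /api/embed endpoint.

    Ollama accepts either a single string or a list of strings under
    "input" and returns {"embeddings": [[...], ...]}.
    """
    return {"model": model, "input": texts}

payload = build_embed_payload("nomic-embed-text", ["I love badminton"])
print(json.dumps(payload))
# POST this body to http://localhost:11434/api/embed with
# Content-Type: application/json while `ollama serve` is running
# and the model has been pulled (`ollama pull nomic-embed-text`).
```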
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a ChromaMemoryService to provide semantic memory search capabilities using ChromaDB, which is a great alternative to InMemoryMemoryService for local development and self-hosted deployments requiring persistence and semantic search. The implementation is well-structured, with a pluggable BaseEmbeddingProvider and an initial OllamaEmbeddingProvider. The addition of unit tests and a sample application is also excellent. My review includes a few suggestions to improve maintainability and consistency.

@ayman3000 ayman3000 force-pushed the feature/chroma-memory-service branch from 90c145b to c352247 on January 18, 2026 22:53
ayman3000 and others added 2 commits January 19, 2026 07:08
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@ryanaiagent ryanaiagent self-assigned this Jan 20, 2026
@ryanaiagent ryanaiagent added the request clarification [Status] The maintainer need clarification or more information from the author label Jan 22, 2026
@ryanaiagent
Collaborator

ryanaiagent commented Jan 22, 2026

Hi @ayman3000, thank you for your contribution! We appreciate you taking the time to submit this pull request. Could you fix the formatting errors and address the comment I left before we proceed with the review?

@ryanaiagent
Collaborator

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a ChromaMemoryService for semantic memory search using ChromaDB, along with a pluggable embedding provider architecture demonstrated with an OllamaEmbeddingProvider. The changes are well-structured, including the core service, an embedding provider base class, an Ollama implementation, unit tests, and a sample application. My review focuses on ensuring correctness, maintainability, and robustness. I've found a critical issue in the Ollama provider's network request implementation that needs to be addressed, along with a couple of medium-severity suggestions to improve code quality and user experience.

Comment on lines 126 to 145
    data = json.dumps(payload).encode("utf-8")
    request = urllib.request.Request(
        url,
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

    try:
      with urllib.request.urlopen(
          request, timeout=self._request_timeout
      ) as response:
        response_body = response.read().decode("utf-8")
    except urllib.error.URLError as exc:
      raise RuntimeError(f"Failed to connect to Ollama: {exc.reason}") from exc
    except urllib.error.HTTPError as exc:
      message = exc.read().decode("utf-8", errors="ignore")
      raise RuntimeError(f"Ollama API error {exc.code}: {message}") from exc

    return json.loads(response_body)
Contributor


critical

This block has a critical issue: it attempts to use urllib.request and urllib.error without importing the urllib module, which will cause a NameError at runtime.

Additionally, the requests library is imported at the top of the file but is not used.

I suggest replacing this entire block with an implementation that uses the requests library. This will fix the bug, make use of the existing import, and result in more concise and readable code.

    try:
      response = requests.post(
          url,
          json=payload,
          timeout=self._request_timeout,
      )
      response.raise_for_status()  # Raises an HTTPError for bad responses
      return response.json()
    except requests.exceptions.RequestException as exc:
      raise RuntimeError(f"Failed to connect to Ollama: {exc}") from exc

    Returns:
      A SearchMemoryResponse containing the matching memories.
    """
    from google.genai import types
Contributor


medium

For consistency and to adhere to standard Python style (PEP 8), imports should be at the top of the file. Please move this import to the top-level imports section of the file.

Comment on lines 51 to 53
'chromadb is not installed. If you want to use the ChromaMemoryService'
' please install it with: pip install chromadb. If not, you can ignore'
' this warning.'
Contributor


medium

The error message suggests pip install chromadb, which is helpful. However, to ensure users install a version compatible with this library as defined in pyproject.toml, it's better to guide them to install the optional dependency group. This prevents potential version conflicts.

Suggested change

    - 'chromadb is not installed. If you want to use the ChromaMemoryService'
    - ' please install it with: pip install chromadb. If not, you can ignore'
    - ' this warning.'
    + 'chromadb is not installed. If you want to use the ChromaMemoryService'
    + ' please install it with: pip install \'google-adk[chroma]\'. If not, you can'
    + ' ignore this warning.'
