Mem0 configuration

Hello, like many others, I have struggled with the AI going off the rails, having to repeat myself, copy/pasting code examples, etc.

I have thought about using mem0 to help with this:

Their docs state specifically that it can be used for coding preferences, code snippets, etc.

So I think it’s a natural fit for AI-assisted coding with Cursor, Cline, Windsurf, etc.

What I am trying to understand is how/where I would configure it to keep memories from different development projects separate. Is there a way to separate them via configuration? I can’t imagine that I need to install a separate server, with its own venv and its own port, for each coding project, right?
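This is the closest thing I found in the mem0 quickstart (and I may well be misreading it) - it looks like add/search take an ID, so maybe each project could just use its own ID as a namespace? The project names below are placeholders I made up:

```python
# Adapted from the mem0 quickstart; using user_id as a per-project
# namespace is my own guess, not something the docs promise.
from mem0 import Memory

m = Memory()  # default config; needs an LLM/embedding provider set up (e.g. an OpenAI key)

# Store a coding preference under one project's ID
m.add(
    "Prefer async/await over callbacks in this codebase.",
    user_id="project-alpha",  # hypothetical per-project ID
    metadata={"type": "coding-preference"},
)

# Searching under a different ID should only surface that project's memories
results = m.search("async preferences?", user_id="project-beta")
print(results)  # expect nothing from project-alpha here
```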

I’m not a developer. I’m just learning. Any help appreciated.

Thanks!

1 Like

I found another MCP memory solution that seems very good! No idea how to use it though.

https://www.reddit.com/r/cursor/comments/1jn6ds5/comment/mkjsyta/

1 Like

Using Mem0/Zep with Graphiti is something I have on my list to look into! The main limitation you pointed out is what’s making me hesitant right now too - these MCP servers don’t seem to provide a way to separate memories between projects unless you create multiple instances of the MCP server, which would be a big pain.

Could be worth making a custom MCP server for something like this! And I’m planning on putting out a video soon for creating a custom MCP server.
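Just to make that concrete, here’s the rough shape I’d start from - a tiny memory server where every tool takes a project name, so one instance can keep projects apart. The tool names and the flat JSON-file storage are just for the sketch, not from any existing server:

```python
# Sketch of a project-scoped memory MCP server using the official
# Python MCP SDK's FastMCP helper. Storage scheme is illustrative only.
import json
from pathlib import Path
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("project-memory")
STORE = Path("memories.json")  # one flat file, keyed by project name

def _load() -> dict:
    return json.loads(STORE.read_text()) if STORE.exists() else {}

@mcp.tool()
def remember(project: str, note: str) -> str:
    """Save a note under the given project's namespace."""
    data = _load()
    data.setdefault(project, []).append(note)
    STORE.write_text(json.dumps(data, indent=2))
    return f"Stored note for {project}"

@mcp.tool()
def recall(project: str) -> list[str]:
    """Return all notes saved for the given project."""
    return _load().get(project, [])

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, which is what Cursor/Cline/RooCode expect
```

Each coding assistant would then pass its workspace name as the project argument, so you’d only ever need to run one server.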

Hey Cole! I saw your new video and suspected I would have a reply waiting here! Sorry I didn’t see this before. I had found this version of the basic memory MCP server that has a JSON knowledge graph with a customizable path: GitHub - itseasy21/mcp-knowledge-graph (MCP server enabling persistent memory for Claude through a local knowledge graph - fork focused on local development). I tried to customize it to index markdown files by their headings. It works but still needs some workarounds (like for de-indexing a file). Even with AI, I often don’t know what I’m doing. hehe.
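By “index by headings” I mean roughly this kind of splitting - a toy version of what I hacked together, not code from the fork:

```python
# Toy illustration of splitting a markdown file into sections keyed by
# heading, roughly what I tried to bolt onto the knowledge-graph fork.
import re
from pathlib import Path

def split_by_headings(md_path: str) -> dict[str, str]:
    """Return {heading: section text} for one markdown file."""
    sections: dict[str, str] = {}
    current = "(preamble)"
    for line in Path(md_path).read_text().splitlines():
        if re.match(r"^#{1,6}\s", line):      # any markdown heading level
            current = line.lstrip("#").strip()
            sections[current] = ""
        else:
            sections[current] = sections.get(current, "") + line + "\n"
    return sections
```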

But now I’m going with LightRAG! I hope you take the LightRAG solution and use it for your ‘Building an MCP Server’ video so that we can use it inside of VS Code. One use case I have is this:

  • I purchased a white-label app from CodeCanyon
  • I made several customizations
  • The original developer releases upstream updates
  • I need to reapply my customizations to the update
So many things can break in that process. It’s a large app with several directories, so I hope LightRAG can help me make sure the AI keeps track of everything that gets affected by every change. But it needs to be an MCP tool that can be called by RooCode (which is what I’ve been using lately, to great success, with Gemini 2.5 for free).
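For context, this is roughly how I understand the LightRAG Python side from its README quickstart (the llm import path and the file in the example are my assumptions, and the API may have moved since); the MCP tool would be a wrapper around something like this:

```python
# Rough sketch based on my reading of the LightRAG README quickstart.
# The llm import path and the example file path are assumptions.
from lightrag import LightRAG, QueryParam
from lightrag.llm import gpt_4o_mini_complete  # quickstart's example model func

rag = LightRAG(
    working_dir="./rag_storage",          # where LightRAG keeps its graph/index files
    llm_model_func=gpt_4o_mini_complete,
)

# Feed it the files I customized so the graph knows about them
rag.insert(open("app/Http/Controllers/PaymentController.php").read())

# Later, ask what an upstream update might touch
print(rag.query(
    "Which of my customizations are related to the payment flow?",
    param=QueryParam(mode="hybrid"),
))
```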

So to clear something up: if it were an MCP server, I would just deploy it in a folder inside my workspace and add that folder to the .gitignore file. That way the memory stays local but doesn’t get synced with the repo when I deploy to a server. Do I have the right idea about that?
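Picture something like this layout (the .mcp-memory/ folder name is just an example I made up):

```
my-project/
├── .gitignore      # would contain a line like:  .mcp-memory/
├── src/            # app code that does get committed and deployed
└── .mcp-memory/    # MCP server + its local memory store, untracked
```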

1 Like

I do actually want to build a LightRAG MCP server! And yes, you understand that right!

2 Likes