Since you’re asking about Ollama, the embedding models you can use with it are listed here: Embedding models · Ollama Search
The endpoint is http://localhost:11434/api/embed
Most Ollama users go with the nomic-embed-text model, but in my experience it has been very slow on lower-spec GPUs/CPUs.
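For reference, here’s a minimal sketch of calling that endpoint from Python with the requests library. It assumes Ollama is running locally on the default port and that you’ve already pulled the model (e.g. `ollama pull nomic-embed-text`); swap in whichever embedding model you actually use.

```python
# Minimal sketch: call Ollama's /api/embed endpoint.
# Assumes Ollama is running on the default port 11434 and the
# nomic-embed-text model has already been pulled locally.
import requests

response = requests.post(
    "http://localhost:11434/api/embed",
    json={
        "model": "nomic-embed-text",  # any embedding model you have pulled
        "input": "The quick brown fox jumps over the lazy dog",
    },
)
response.raise_for_status()

data = response.json()
# /api/embed returns a list of vectors under "embeddings", one per input string
embedding = data["embeddings"][0]
print(len(embedding), embedding[:5])
```

You can also pass a list of strings as `input` to embed a batch in one request, which helps a bit with throughput on slower hardware.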