Google Gemini 2.0 (Exp-1219) Free!

I just wanted to throw something fun and pretty awesome out there!

I’ve been using Google Gemini 2.0 for free the last few days on a few projects I’ve been working on. And heresy, I know, but I’ve actually been using Cline for VS Code, as it offers a lot of awesome features still lacking in Bolt.diy (which I still see as the future of AI coding). The problem with Cline is that it uses A LOT of tokens and was burning through my Claude credits pretty fast…

So I tried Llama 3.3 70B Instruct, because it’s under 20¢/MTok through OpenRouter and pretty snappy too. Crazy, given that Claude is 15x+ that. But I wasn’t having much luck, and then I remembered that Google offers Gemini Flash 2.0 (exp-1219 currently) for free, and I heard there are no caps or limits. Microsoft also offers GPT-4o for free through GitHub, but in my testing it didn’t seem to work well for coding; I tried it once with Cline and ran out of my daily token limit within minutes… absurd.
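In case anyone wants to reproduce the OpenRouter setup outside of Cline, here’s a minimal sketch using OpenRouter’s OpenAI-compatible endpoint. The model slug and placeholder key are my assumptions, not something from this thread; double-check the slug on openrouter.ai before relying on it.

```python
# Minimal sketch: Llama 3.3 70B Instruct via OpenRouter's OpenAI-compatible API.
# Assumes `pip install openai`. The model slug is my best guess at OpenRouter's
# naming -- verify it at https://openrouter.ai/models.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # OpenRouter speaks the OpenAI API
    api_key="YOUR_OPENROUTER_KEY",            # hypothetical placeholder
)

resp = client.chat.completions.create(
    model="meta-llama/llama-3.3-70b-instruct",
    messages=[{"role": "user", "content": "Refactor this function to avoid recursion."}],
)
print(resp.choices[0].message.content)
```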

Sadly, Llama 3.3 70B Instruct seemed to get stuck in logic loops and rack up token usage even worse than Claude, but it really didn’t cost much to test, so that’s fine. The really interesting part was trying Gemini exp-1219: it did the best, and by round two of back-and-forth, everything I asked for (in steps) was working. Impressive!

For me, Gemini seems to work even better than Claude, is currently free to use, and didn’t seem to have any limits. This was my token usage in one task session… [screenshot: token usage]

If anyone else wants to sign up and try it out, visit Google AI Studio to get an API key.
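Once you have a key, calling the model directly is only a few lines. A minimal sketch with the google-generativeai Python package follows; the exact model ID is an assumption on my part, so pick whichever experimental model AI Studio currently lists.

```python
# Minimal sketch: Gemini 2.0 experimental via an AI Studio API key.
# Assumes `pip install google-generativeai`. The model ID is an assumption --
# check the model list in AI Studio for the current experimental name.
import google.generativeai as genai

genai.configure(api_key="YOUR_AI_STUDIO_KEY")  # hypothetical placeholder
model = genai.GenerativeModel("gemini-2.0-flash-exp")  # or "gemini-exp-1219"

response = model.generate_content("Write a unit test for my parser module.")
print(response.text)
```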

P.S. If anyone else has any useful tidbits, I’d love to hear them. This is all experimental at this point and I just want to improve my workflow. Thanks and best!


I fully agree. I’ve also been working just with Gemini 2.0 (the experimental one) on my project these last few days, and it works pretty well. Sometimes, when I let it make a lot of changes in a longer coding session (>1h), it gets a bit slower, but that’s not a problem at all.
While it implements the stuff/features I need, I just use the time to think about other features/bugs/etc. I want to cover next :slight_smile:


I literally just found out that DeepSeek-V3 dropped, with impressive results.

Their website is currently down (probably crashed because Wes Roth just released a video on it). But I will definitely test it out tomorrow, and I’m trying to tune into @ColeMedin’s stream, though I stayed up way too late developing, lol.

They also dropped their price (available directly through them or OpenRouter) to $0.14/MTok (input) and $0.28/MTok (output), which is about half the price for input, about 75% cheaper per output token, and nearly 20x cheaper for cached input. That’s less than the cost of Llama 3.3 70B Instruct.
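DeepSeek exposes an OpenAI-compatible API, so calling it directly looks much like the OpenRouter snippet above. The base URL and model name below are my assumptions from their docs, so verify them before relying on this.

```python
# Minimal sketch: DeepSeek-V3 via their OpenAI-compatible API.
# Base URL and model name are assumptions -- confirm against DeepSeek's docs.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",
    api_key="YOUR_DEEPSEEK_KEY",  # hypothetical placeholder
)

resp = client.chat.completions.create(
    model="deepseek-chat",  # DeepSeek-V3 at the time of writing
    messages=[{"role": "user", "content": "Summarize this diff for a commit message."}],
)
print(resp.choices[0].message.content)

# Rough cost check at the quoted rates: a heavy session of 3M input tokens and
# 0.5M output tokens would run 3 * $0.14 + 0.5 * $0.28 = $0.56.
```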

I know it was just released, but I can’t wait for the Instruct, Coder, and Quantized models to drop.


But this is not working with Bolt.diy at the moment, right?


Not directly, but it seems to work just fine through OpenRouter. For the same reason (the site being down), their chat demo isn’t working either.
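Concretely, routing through OpenRouter is just a matter of pointing the same OpenAI-compatible client at their endpoint and swapping the model slug. The slug is an assumption; check openrouter.ai for the current one.

```python
# Same OpenAI-compatible client as above, but routed through OpenRouter
# instead of DeepSeek's (currently down) direct API.
# The "deepseek/deepseek-chat" slug is an assumption -- verify on openrouter.ai.
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_OPENROUTER_KEY")
resp = client.chat.completions.create(
    model="deepseek/deepseek-chat",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```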