Gemini CLI's Polarizing Release on Hacker News – A Visual Analysis

· kian's blog


Google released a new developer-oriented CLI tool for Gemini. The tool has a relatively generous free tier of 60 requests per minute and 1,000 requests per day. When large language models (LLMs) were initially released, there was a lot of fear that they would automate many white-collar jobs; however, their unreliability and risk of off-the-rails behavior held them back in tasks without easy verification or ground truths. Developer-oriented tooling seems to be a better fit for a productivity tool that depends on validation.

These models benefit aggressively from data labelling: the difference between the relatively quiet GPT-3 and the GPT-3.5 model that took the world by storm was millions of hours spent manually answering questions that the model was struggling with. This release seems to be Google's attempt to drive adoption of Gemini for coding: a model that has impressed many engineers but seems to have been held back by the greater GCP ecosystem.

To that end, the Gemini CLI launch was met with sharply mixed reviews from the Hacker News community. I ingested all the comments on the Hacker News post and kicked off a quick interactive analysis.
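I didn't publish my ingestion code, but a minimal version of that step using Hacker News's public Algolia API (which returns the whole comment tree in one request) might look like the sketch below. The story id is a placeholder, not necessarily the real thread id.

```python
import json
import urllib.request

def flatten_comments(item):
    """Recursively collect comment texts from an Algolia HN item tree."""
    texts = []
    if item.get("text"):
        texts.append(item["text"])
    for child in item.get("children", []):
        texts.extend(flatten_comments(child))
    return texts

def fetch_thread(story_id):
    """Fetch a story and all of its nested comments in a single request."""
    url = f"https://hn.algolia.com/api/v1/items/{story_id}"
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())

STORY_ID = 0  # placeholder: substitute the HN item id of the Gemini CLI post
# comments = flatten_comments(fetch_thread(STORY_ID))
```

The flattened list of comment texts is then what gets fed to the topic model.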

I first built the sunburst below.

What is a sunburst? A sunburst is a hierarchical visualization of data. In most cases on my website, I use it to represent a hierarchical analysis of text data created by Bayesian mixture models. The model learns a hierarchical set of topics and applies them to each word, sentence, paragraph, and document in a structured and explainable way.

The innermost circle of the sunburst is the title of the analysis. Each intermediate ring is a high-level "topic group." Each outer ring is a granular "topic" that is a subset of a topic group. The size of each slice is directly proportional to how prominent the topic or topic group is in the dataset.
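This isn't the actual mixture-model code, but the slice-sizing rule — each slice's angular span proportional to its topic's prominence, nested inside its parent's arc — can be sketched in a few lines (the topic tree here is made up):

```python
def sunburst_angles(node, start=0.0, sweep=360.0):
    """Assign each topic node an angular span proportional to its weight.

    `node` is {"name": str, "weight": float, "children": [...]}. A child's
    span is its share of the parent's total child weight, so outer-ring
    topics always nest inside their topic group's arc.
    """
    spans = {node["name"]: (start, start + sweep)}
    total = sum(c["weight"] for c in node.get("children", [])) or 1.0
    cursor = start
    for child in node.get("children", []):
        child_sweep = sweep * child["weight"] / total
        spans.update(sunburst_angles(child, cursor, child_sweep))
        cursor += child_sweep
    return spans

# Toy example: one topic group three times as prominent as the other.
spans = sunburst_angles({
    "name": "analysis", "weight": 1.0, "children": [
        {"name": "api confusion", "weight": 3.0, "children": []},
        {"name": "packaging", "weight": 1.0, "children": []},
    ],
})
```

A plotting library then just draws each `(start, end)` arc at the ring matching its depth.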

Hacker News Discussion Sunburst

A few items that really stood out to me:

Google API Confusion #

I pulled up all the excerpts that discuss this. The TLDR is that of the ~700 comments on this post, ~80 mention in some form the confusion regarding the separation and overlap between Google's AI Studio, Vertex, and the Gemini API. Perusing the discussion, the rough consensus is that the "difference between Vertex and Gemini APIs is that Vertex is meant for existing GCP users and Gemini API for everyone else."

source
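The topic model did the real tagging behind that ~80-of-700 tally, but a crude keyword proxy shows the shape of the computation. The three sample comments below are invented for illustration:

```python
# Hypothetical mini-corpus standing in for the ~700 ingested comments.
comments = [
    "The difference between Vertex and the Gemini API is still unclear to me.",
    "AI Studio created a GCP project behind my back.",
    "Love the CLI, works great with npx.",
]

PRODUCTS = ["ai studio", "vertex", "gemini api"]

def mentions_api_confusion(comment):
    """Flag comments that name at least two of the overlapping Google offerings."""
    text = comment.lower()
    return sum(p in text for p in PRODUCTS) >= 2

confused = [c for c in comments if mentions_api_confusion(c)]
print(f"{len(confused)}/{len(comments)} comments touch the API-overlap confusion")
```

Keyword matching over-/under-counts badly compared to the mixture model, which is why the sunburst counts come from the model's per-sentence topic assignments instead.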

A few posts to highlight

Actually, that's the reason a lot of startups and solo developers prefer non-Google solutions, even though the quality of Gemini 2.5 Pro is insanely high. The Google Cloud Dashboard is a mess, and they haven't fixed it in years. They have Vertex that is supposed to host some of their models, but I don't understand what's the difference between that and their own cloud. And then you have two different APIs depending on the level of your project: This is literally the opposite of what we would expect from an AI provider where you start small and regardless of the scale of your project, you do not face obstacles. So essentially, Google has built an API solution that does not scale because as soon as your project gets bigger, you have to switch from the Google AI Studio API to the Vertex API. And I find it ridiculous because their OpenAI compatible API does not work all the time. And a lot of tools that rely on that actually don't work.

source

An interesting thing is that Google AI offers are much more confusing than the OpenAI ones — despite the fact that ChatGPT models have one of the worst naming schemes in the industry. Google has confusing model names, plans, API tiers, and even interfaces (AI Studio, Gemini app, Gemini Web, Gemini API, Vertex, Google Cloud, Code Assist, etc.). More often than not, these things overlap with one another, ensuring minimal clarity and preventing widespread usage of Google’s models.

source

More Excerpts

Gemini CLIblog.google2025-06-25

I will say as someone who uses GCP as an enterprise user and AI Studio in personal work, I was also confused about what Google AI Studio actually was at first. I was trying to set up a fork of Open NotebookLM and I just blindly followed Cursor’s guidance on how to get a GOOGLE_API_KEY to run text embedding API calls. Seems that it just created a new project under my personal GCP account, but without billing set up. I think I’ve been successfully getting responses without billing but I don’t know when that will run out.. suppose I’ll get some kind of error response if that happens..

Gemini CLIblog.google2025-06-25

It took me a while but I think the difference between Vertex and Gemini APIs is that Vertex is meant for existing GCP users and Gemini API for everyone else. If you are already using GCP then Vertex API works like everything else there. If you are not, then Gemini API is much easier. But they really should spell it out, currently it's really confusing.

Gemini CLIblog.google2025-06-26

Is it really that confusing? Gemini is the equivalent of ChatGPT; AI Studio is for advanced users that want to control e.g. temperature; Vertex AI is the GCP integrated API; Notebook LLM is basically personal RAG; and Jules is a developer agent.

Gemini CLIblog.google2025-06-26

I’m a small time GCP customer for five or six years, and relatively tech competent, and I had a very difficult time getting Gemini code set up yesterday with Vertex API keys; finally I had to use gcloud to login from the CLI in combination with clicking a link and doing web sign on from Gemini. This frustrated me, not least because I have API direct calls to Vertex Gemini working from Aider, although I could not tell you exactly what incantation I finally used to make it work. In particular it didn’t look to me like the Gemini code app uses something like dotenv? I don’t recall now; upshot - could get it to tell me I was logged in wrong / had an oauth2 error / needed a project id at various times, but no inference.

Gemini CLIblog.google2025-06-26

2. There should be one unified API, not two! That'll help scale products with ease. Currently Google recommends using Google AI Studio API for simple projects and one-off scripts, and Vertex for "real" projects. No other competitor does this (look at OpenAI for instance).

Gemini CLIblog.google2025-06-26

Ah I think I see based on the other comment but just to confirm - you want to use Vertex provided Gemini API endpoints without having to create a Google Cloud project. Is that correct? (I’m just trying to get as precise about the problem statement and what success looks like - that helps me figure out a path to the best solution.)

Gemini CLIblog.google2025-06-25

I already use goose[1]. It lets me connect through OpenRouter. Then I can use Gemini without having to give Google Cloud my credit card. Also, OpenRouter makes it easier to switch between models, deals with Claude's silly rate limiting messages gracefully, and I only have to pay in one place.

Gemini CLIblog.google2025-06-25

Having played with the gemini-cli here for 30 minutes, so I have no idea but best guess: I believe that if you auth with a Workspace account it routes all the requests through the GCP Vertex API, which is why it needs a GOOGLE_CLOUD_PROJECT env set, and that also means usage-based billing. I don't think it will leverage any subscriptions the workspace account might have (are there still gemini subscriptions for workspace? I have no idea. I thought they just raised everyone's bill and bundled it in by default. What's Gemini Code Assist Standard or Enterprise? I have no idea).

Gemini CLIblog.google2025-06-25

Not if you're in EU though. Even though I have zero or less AI use so far, I tinker with it. I'm more than happy to pay $200+tax for Max 20x. I'd be happy to pay same-ish for Gemini Pro.. if I knew how and where to have Gemini CLI like I do with Claude code. I have Google One. WHERE DO I SIGN UP, HOW DO I PAY AND USE IT GOOGLE? Only thing I have managed so far is through openrouter via API and credits which would amount to thousands a month if I were to use it as such, which I won't do.

Gemini CLIblog.google2025-06-26

Creating an API key from AI Studio automatically creates a Google Cloud project in the background for you. You can see it when you're logged into the console or via `gcloud projects list`

Gemini CLIblog.google2025-06-25

Nice, at least i could get rid of the broken Warp CLI which prevents offline usage with their automatic cloud ai feature enabled.

Gemini CLIblog.google2025-06-25

Totally fair. Yes, Google AI Studio [https://aistudio.google.com] lets you do this but Google Cloud doesn't at this time. That's super duper irritating, I know.

Gemini CLIblog.google2025-06-25

> I think the difference between Vertex and Gemini APIs is that Vertex is meant for existing GCP users and Gemini API for everyone else

Gemini CLIblog.google2025-06-25

>When you use Unpaid Services, including, for example, Google AI Studio and the unpaid quota on Gemini API, Google uses the content you submit to the Services and any generated responses to provide, improve, and develop Google products and services and machine learning technologies, including Google's enterprise features, products, and services, consistent with our Privacy Policy.

Gemini CLIblog.google2025-06-25

AFAIK you can very easily get an API key from AI studio without creating any cloud project

Gemini CLIblog.google2025-06-25

I just use gemini-pro via openrouter API. No painful clicking around on the cloud to find the billing history.

Gemini CLIblog.google2025-06-25

The key to running LLM services in prod is setting up Gemini in Vertex, Anthropic models on AWS Bedrock and OpenAI models on Azure. It's a completely different world in terms of uptime, latency and output performance.

Gemini CLIblog.google2025-06-25

Fast following is a reasonable strategy. Anthropic provided the existence proof. It’s an immensely useful form factor for AI.

(Intentionally?) Confusing Privacy Policy #

Data collection is on by default; you have to explicitly opt out. The major grievance, however, is that the policy seems to state that even if you opt out, Google's internal teams still retain the right to use your data.
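As an aside on the opt-out mechanics: at the time of writing, the Gemini CLI's configuration docs describe a `usageStatisticsEnabled` key in `~/.gemini/settings.json`. I'm citing that from memory, so treat the key name as an assumption to verify against current docs — and commenters dispute whether any such toggle fully removes your prompts from training pipelines anyway.

```json
{
  "usageStatisticsEnabled": false
}
```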

Some key excerpts:

The first section states "Privacy Notice: The collection and use of your data are described in the Gemini Code Assist Privacy Notice for Individuals." That in turn states "If you don't want this data used to improve Google's machine learning models, you can opt out by following the steps in Set up Gemini Code Assist for individuals.". That page says to use the VS Code Extension to change some toggle, but I don't have that extension. It states the extension will open "a page where you can choose to opt out of allowing Google to use your data to develop and improve Google's machine learning models." I can't find this page.

source

However, as others pointed out, that link takes you here: https://developers.google.com/gemini-code-assist/resources/p... which, at the bottom, says: "If you don't want this data used to improve Google's machine learning models, you can opt out by following the steps in Set up Gemini Code Assist for individuals." and links to https://developers.google.com/gemini-code-assist/docs/set-up.... That page says "You'll also see a link to the Gemini Code Assist for individuals privacy notice and privacy settings. This link opens a page where you can choose to opt out of allowing Google to use your data to develop and improve Google's machine learning models. These privacy settings are stored at the IDE level."

source

Another dimension here is that any "we don't train on your data" is useless without a matching data retention policy which deletes your data. Case and point of 23andMe not selling your data until they decided to change that policy.

source

Usage statistics includes "your prompts and answers", see the last paragraph in the ToS. I have no idea why legal insists we write "statistics" rather than "data".

source

More Excerpts

Gemini CLIblog.google2025-06-25

To help with quality and improve our products (such as generative machine-learning models), human reviewers may read, annotate, and process the data collected above. We take steps to protect your privacy as part of this process. This includes disconnecting the data from your Google Account before reviewers see or annotate it, and storing those disconnected copies for up to 18 months. Please don't submit confidential information or any data you wouldn't want a reviewer to see or Google to use to improve our products, services, and machine-learning technologies.

Gemini CLIblog.google2025-06-25

>To help with quality and improve our products, human reviewers may read, annotate, and process your API input and output. Google takes steps to protect your privacy as part of this process. This includes disconnecting this data from your Google Account, API key, and Cloud project before reviewers see or annotate it. Do not submit sensitive, confidential, or personal information to the Unpaid Services.

Gemini CLIblog.google2025-06-25

* For individuals: "When you use Gemini Code Assist for individuals, Google collects your prompts, related code, generated output, code edits, related feature usage information, and your feedback to provide, improve, and develop Google products and services and machine learning technologies."

Gemini CLIblog.google2025-06-26

Is there any way for a user using the "Login with Google ... for individuals" auth method (I guess auth method 1) -- to opt-out of, and prevent, their input prompts, and output responses, from being used as training data?

Gemini CLIblog.google2025-06-26

It's great to see Google expanding Gemini into a CLI tool, but I do have concerns about data usage. While it’s free for certain requests, the fact that both prompts and outputs can be used to train the model raises privacy and usage questions. Clearer transparency and control over data sharing would be appreciated.

Gemini CLIblog.google2025-06-25

Collection means it gets sent to a server, logging implies (permanent or temporary) retention of that data. I tried finding a specific line or context in their privacy policy to link to but maybe someone else can help me provide a good reference. Logging is a form of collection but not everything collected is logged unless mentioned as such.

Gemini CLIblog.google2025-06-25

We need laws that these megacorps have to show in an easy and understandable form which data is collected and what happens to the data. If they do fail to explain this (in 5 sentences or less) - they should pay insane fines per day. It is the only way (and solves the debt crisis of the US at the same time). It is ridiculous that we do have this situation in 2025 that we do not know which data is processed or not.

Gemini CLIblog.google2025-06-25

"If you don't want this data used to improve Google's machine learning models, you can opt out by following the steps in Set up Gemini Code Assist for individuals."

Gemini CLIblog.google2025-06-25

To be honest this is by far the most frustrating part of the Gemini ecosystem, to me. I think 2.5 pro is probably the best model out there right now, and I'd love to use it for real work, but their privacy policies are so fucking confusing and disjointed that I just assume there is no privacy whatsoever. And that's with the expensive Pro Plus Ultra MegaMax Extreme Gold plan I'm on.

Gemini CLIblog.google2025-06-25

I mean, using data that has been explicitly opted out of training paves the way for lawsuits and huge administrative fines in various jurisdictions. I might be naive, but I don’t think that’s something OpenAI would deliberately do.

Gemini CLIblog.google2025-06-25

You raise an interesting topic. Right now, when we think about privacy in the AI space, most of the discussion hinges on using our data for training purposes or not. That being said, I figure it won’t be long before AI companies use the data they collect to personalize ads as well.

Gemini CLIblog.google2025-06-25

That bears no relation to OpenAI using data for training purposes. Although the court’s decision is problematic, user data is being kept for legal purposes only, and OpenAI is not authorized to use it to train its models.

Gemini CLIblog.google2025-06-25

Google recently testified in court that they still train on user data after users opt out from training [1]. The loophole is that the opt-out only applies to one organization within Google, but other organizations are still free to train on the data. They may or may not have cleaned up their act given that they're under active investigation, but their recent actions haven't exactly earned them the benefit of the doubt on this topic.

Gemini CLIblog.google2025-06-25

Don’t know about Claude, but usually Google’s free offers have no privacy protections whatsoever — all data is kept and used for training purposes, including manual human review.

Gemini CLIblog.google2025-06-25

I think you did a good job CYA on this, but what people were really looking for was a way to opt-out of Google collecting code, similar to the opt-out process for the IDE is available.

Gemini CLIblog.google2025-06-25

> For API users, we automatically delete inputs and outputs on our backend within 30 days of receipt or generation, except when you and we have agreed otherwise (e.g. zero data retention agreement), if we need to retain them for longer to enforce our Usage Policy (UP), or comply with the law.

Binary Packaging Challenges aka Node.js #

~50 comments discuss their irritation with the Node.js requirement, wishing instead for a single compiled binary. There also seems to be a split between users who prefer limited dependencies and portability and those who prefer to offload that management and logic to package managers.

Some relevant discussion excerpts:

Imagine the ease of a single ".targz" or so that includes the correct python version, all pips, all ENV vars, config files, and is executable. If you distribute that - what do you still need pip for? If you distribute that, how simple would turning it into a .deb, snap, dmg, flatpack, appimg, brew package, etc be? (Answer: alot easier than doing this for the "directory of .py files. A LOT)

source

As someone who's packaged Javascript(node), Ruby, Go and rust tools in .debs, snap, rpms: packaging against a dynamic runtime (node, ruby, rvm etc) is a giant PIAS that will break on a significant amount of users' machines, and will probably break on everyones machine at some point. Whereas packaging that binary is as simple as it can get: most such packages need only one dependency that everyone and his dog already has: libc.

source

You might not know the reason ppl use package managers. Installing this "simple" way make it quite difficult to update and remove compared to using package managers. And although they are also "simple", it's quite a mess to manage packages manually in place of using such battle-tested systems

source

More Excerpts
Gemini CLIblog.google2025-06-26

Just `wget -O ~/.local/bin/gemini-cli https://ci.example.com/assets/latest/gemini-cli` (Or the CURL version thereof)

It can pick the file off github, some CI's assets, a package repo, a simple FTP server, an HTTP fileserver, over SSH, from a local cache, etc. It's so simple that one doesn't need a package manager. So there commonly is no package manager.

Gemini CLIblog.google2025-06-26

But for workstation, a lot of people wants the latest, so the next solution was to be able to abstract the programming language ecosystem from the distribution (And you may not have a choice in the case of macOS), so what we get is directory-restricted interactions (go, npm,..) or doing shell magic so that the tooling think it's the system (virtual env,...).

Gemini CLIblog.google2025-06-26

Having a sensible project is what make it easy down the line (including not depending on gnu libc if not needed as some people uses musl). And I believe it's easy to setup a repository if your code is proprietary (Just need to support the most likely distribution, like ubuntu, fedora, suse's tumbleweed,...)

Gemini CLIblog.google2025-06-26

I find this frustrating because when switching between different Node versions for various projects, gemini-cli might not be compatible with all of them. That means even if I’ve installed it globally, it won’t work in some directories, as .nvm changes the node version. then I have to install different copies of gemini-cli for each version of node I work in.

Gemini CLIblog.google2025-06-25

If you have to run end point protection that will blast your CPU with load and it makes moving or even deleting that folder needlessly slow. It also makes the hosting burden of NPM (nusers) who must all install dependencies instead of (nCI instances), which isn't very nice to our hosts. Dealing with that once during your build phase and then packaging that mess up is the nicer way to go about distributing things depending on NPM to end users.

Gemini CLIblog.google2025-06-26

It's more like I want my OS package manager to be handling global packages (personal preference), and there is a higher chance of it being included in official OS repositories if it is packaged as a precompiled binary with minimal dependencies.

Gemini CLIblog.google2025-06-25

While writing this comment, thinking that there should be some packaging tool that would create a binaries from npx cli tools.

I remember such things for python.

Binaries were fat, but it is better then keep nodejs installed on my OS

Gemini CLIblog.google2025-06-25

That is point not a line. An extra 2MB of source is probably a 60MB executable, as you are measuring the runtime size. Two "hello worlds" are 116MB? Who measures executables in Megabits?

Gemini CLIblog.google2025-06-25

You'd think that, but a globally installed npm package is annoying to update, as you have to do it manually and I very rarely need to update other npm global packages so at least personally I always forget to do it.

Gemini CLIblog.google2025-06-25

How so? Doesn’t it also make updates pretty easy? Have the precompiled binary know how to download the new version. Sure there are considerations for backing up the old version, but it’s not much work, and frees you up from being tied to one specific ecosystem

Gemini CLIblog.google2025-06-25

Language choice is orthogonal to distribution strategy. You can make single-file builds of JavaScript (or Python or anything) programs! It's just a matter of packaging, and there are packaging solutions for both Bun and Node. Don't blame the technology for people choosing not to use it.

Gemini CLIblog.google2025-06-25

I don’t think that’s true. For instance, uv is a single, pre-compiled binary, and I can just run `uv self update` to update it to the latest version.

Gemini CLIblog.google2025-06-25

It depends a lot on what the executable does. I don’t know the hello world size, but anecdotally I remember seeing several go binaries in the single digit megabyte range. I know the code size is somewhat larger than one might expect because go keeps some type info around for reflection whether you use it or not.

Gemini CLIblog.google2025-06-25

And it works hella better than using pip in a global python install (you really want pipx/uvx if you're installing python utilities globally).

Gemini CLIblog.google2025-06-25

Note, I haven't checked that this actually works, although if it's straightforward Node code without any weird extensions it should work in Bun at least. I'd be curious to see how the exe size compares to Go and Rust!

Gemini CLIblog.google2025-06-25

I ran the npm install command in their readme, it took a few seconds, then it worked. Subsequent runs don't have to redownload stuff. It's 127MB, which is big for an executable but not a real problem. Where is the painful part?

Gemini CLIblog.google2025-06-25

It would be more accurate to say I packaged it. llamafile is a project I did for Mozilla Builders where we compiled llama.cpp with cosmopolitan libc so that LLMs can be portable binaries. https://builders.mozilla.org/ Last year I concatenated the Gemma weights onto llamafile and called it gemmafile and it got hundreds of thousands of downloads. https://x.com/JustineTunney/status/1808165898743878108 I currently work at Google on Gemini improving TPU performance. The point is that if you want to run this stuff 100% locally, you can. Myself and others did a lot of work to make that possible.

The broader trend #

I remember that for the first year or even longer, every new LLM release was met with a lot of excitement. Every company even mildly connected to data or text analysis was releasing models, and they were generally met with excitement and enthusiasm. Today the discussion around LLMs has shifted: from something that will change employment and capitalism as we know it to another tool in a developer's tool belt. And yes, this is not a new LLM or model, just a new CLI. But the discussion around LLMs has changed.

LLMs discussion on Hacker News

There's no grand point here. The negative response to the Gemini CLI, which offers free access and a large context window but comes with a dubious privacy policy and a heavyweight wrapper, really drove home how much the response to and landscape around language models has changed: from magic bullet to an unwieldy axe that is more likely to hurt you than your problem unless you are a true expert in wielding it.

Full Analysis #

You can see the full sunburst analysis here. You can view the interactive slope plot here.
