
Photo by the author via Canva
# Introduction
Gemini CLI is a new AI assistant from Google that runs in the terminal. It brings the Gemini 2.5 Pro language model directly into your shell, so you can ask questions, generate code, fix bugs, or write documentation without leaving the command line. "Gemini" is the LLM; "Gemini CLI" is essentially the user-facing tool that makes the model interactive in your workflows. In miniature, it's like ChatGPT for programmers. Google released Gemini CLI in June 2025, and it is FREE for individuals: just log in with your personal Google account and you get access to Gemini 2.5 Pro with a huge 1-million-token context window at no cost (up to 60 requests/minute and 1,000 requests/day). This makes it a great free, open-source alternative to AI coding assistants such as Anthropic's Claude Code.
Let me walk you through the setup and run a few examples to show what it can do.
# Setting Up Gemini CLI on Your System
To install Gemini CLI, you need a command line (the terminal on macOS/Linux, PowerShell or similar on Windows) and either Homebrew or Node.js. On macOS, the easiest method is Homebrew:
- Install Gemini CLI via Homebrew: run the following command in your terminal.
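With Homebrew, the install is a one-liner (assuming the formula is published under the name `gemini-cli`, as in Homebrew's current packaging):

```shell
# Install Gemini CLI via Homebrew (macOS or Linux)
brew install gemini-cli
```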
- Alternatively, install via Node.js (any operating system): if you prefer not to use Homebrew, install Node.js (version 20 or newer), then run:
```shell
npm install -g @google/gemini-cli
```

or:

```shell
npx https://github.com/google-gemini/gemini-cli
```
This installs Gemini CLI globally on macOS, Linux, or Windows. Node.js v20+ is required; you can download it from nodejs.org or use nvm to manage versions.
After installing, simply run the following command to start Gemini CLI:
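The launch command is simply the tool's name:

```shell
# Launch an interactive Gemini CLI session
gemini
```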
It should start the CLI (if you see the "Gemini CLI" ASCII banner, you are set). If `gemini` is not found, you may need to open a new terminal or add npm's global bin directory to your PATH. You will see something like this:


Screenshot of Gemini CLI starting up
On first run, Gemini CLI will ask you to choose a color theme (light or dark) and then to log in with a Google account. Follow the instructions in the browser (or the CLI) to authorize. If you prefer to use an API key instead of logging in, you can set `GEMINI_API_KEY="your_key"` in your environment (see Google AI Studio to generate a key). After authentication, the CLI confirms it is ready to use.
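For the API-key route, export the variable in your shell (or shell profile); the value below is a placeholder, not a real key:

```shell
# Authenticate with an API key from Google AI Studio instead of Google login.
# "your_key_here" is a placeholder value.
export GEMINI_API_KEY="your_key_here"

# Sanity check that the variable is visible to child processes
echo "$GEMINI_API_KEY"
```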
# Running Your First Gemini CLI Commands
Once Gemini CLI is configured, you can immediately start issuing natural-language commands. Open the prompt (marked `>`) where you enter questions or tasks. For example, let's start with a simple request: "Write a short paragraph about why Gemini CLI is amazing." Here is the output:


Screenshot of Gemini CLI: writing a simple paragraph
// Task 1: Fixing bugs with the help of Gemini CLI
Gemini CLI can integrate with tools such as GitHub or local Git to find problems. For example, let's use the built-in @search tool to fetch a GitHub issue URL and then ask for a fix plan:
Prompt:
Here’s a GitHub issue: [@search https://github.com/google-gemini/gemini-cli/issues/4715]. Analyze the code and suggest a 3-step fix plan.
The CLI identified the root cause and suggested how to modify the code. The screenshot below shows it reporting a 3-step plan. You can review the plan and then confirm, letting Gemini CLI apply the changes to your files automatically.


Screenshot of Gemini CLI: fixing bugs
// Task 2a: Working with a project (simple example)
I created a project folder by cloning the GitDiagram repository. If you want to learn more about this repository, see my article: Understand the GitHub 10K+ repository without reading the code. Let's move into our project folder:
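Assuming the repository lands in a folder named `gitdiagram` (the clone URL below is the upstream GitDiagram repo; adjust if you cloned a fork):

```shell
# Clone the GitDiagram repository and enter the project folder
git clone https://github.com/ahmedkhaleel2004/gitdiagram.git
cd gitdiagram
```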
Now run `gemini`. You can start asking questions about the code. Let's try the following prompt:
Prompt:
Explain the main components of this codebase
Gemini CLI scans the files and uses the Gemini model to summarize or answer, as shown in the screenshot below:


Screenshot of Gemini CLI: working with a project (simple example)
It parsed the folders and returned a structured summary (directory listings such as src/, public/, etc.). This is useful for onboarding into new projects or generating documentation.
// Task 2b: Working with a project (advanced example)
Exploring a codebase is the easy part. Now let's give it a more technical task to see how it fares. We'll give Gemini the following prompt:
Prompt:
Analyze this repo for common performance anti-patterns.


Screenshot of Gemini CLI: working with a project (advanced example)
The answer was so detailed that only its initial part fits in the screenshot. Gemini CLI created a detailed plan, then read each file on its own. Finally, it shared a summary of potential performance anti-patterns along with recommended next steps:
```
Summary of potential performance anti-patterns based on initial analysis:

1. Large Frontend Bundle Size:
   * Mermaid Library: The mermaid library is likely a significant contributor to the
     bundle size. If it's not lazy-loaded, it will impact initial page load performance.
   * `react-icons`: Depending on how it's imported, it can also contribute to bundle
     size if not tree-shaken effectively.
2. Unoptimized Image Usage: While Next.js has next/image, without inspecting the actual
   image usage in components, it's hard to say if images are properly optimized
   (e.g., correct sizes, formats, lazy loading).
3. Potential Backend Performance Issues (Python & Node.js):
   * N+1 Queries: This is a common database anti-pattern that can significantly slow
     down data retrieval.
   * Lack of Caching: If frequently accessed data is not cached at the application or
     database level, it can lead to redundant computations and database hits.
   * Synchronous Operations: Blocking I/O in either backend could lead to performance
     bottlenecks under heavy load.
4. `reactStrictMode: false`: While not a direct anti-pattern, it can hide potential
   performance issues related to React's rendering behavior during development.
5. Development-like Docker Volume Mount: Mounting the entire backend directory in the
   Docker container is less optimal for production builds compared to copying only
   necessary files.

To confirm these, further investigation would be needed, including:

* Bundle Analysis: Using tools like @next/bundle-analyzer to identify large modules
  in the frontend.
* Performance Profiling: Running the application and using browser developer tools
  (for frontend) and backend profiling tools to identify bottlenecks.
* Code Review: Deep diving into the src/ and backend/ code to identify specific
  instances of the anti-patterns mentioned.
```
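To act on the bundle-analysis suggestion, a typical workflow with `@next/bundle-analyzer` looks like the sketch below. It assumes a Next.js project whose `next.config.js` has already been wrapped with the analyzer (gated on `process.env.ANALYZE`, as its README shows):

```shell
# Add the analyzer as a development dependency
npm install --save-dev @next/bundle-analyzer

# Build with analysis enabled; opens a treemap of module sizes.
# The ANALYZE flag only works if next.config.js checks process.env.ANALYZE.
ANALYZE=true npm run build
```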
These examples show how Gemini CLI turns plain prompts into real actions. You can ask questions about the code, generate or refactor it, fix bugs, and improve performance, all from the terminal.
# Wrapping Up
Gemini CLI is a new, powerful tool for developers. Once installed on macOS (or any operating system), you can interact with Google's Gemini LLM as easily as with any local command. Some of the key features that stand out:
- ReAct agent loop: Internally, it runs a ReAct (reason-and-act) agent loop against your local environment. This means it can decide when to call a tool (search, run a shell command, edit a file) versus when to answer directly. For example, it fetched the issue URL via @search when needed.
- Built-in tools: It ships with built-in tools such as grep, echo, and file read/write, and it can run web searches or query the file system.
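Inside a session, these tools surface through lightweight prompt syntax (the file path below is hypothetical; the `@`, `!`, and `/tools` conventions are Gemini CLI's documented shortcuts):

```shell
# Start a session in your project
gemini

# Then, at the > prompt:
#   > Explain @src/main.py     # inject a file's contents with @
#   > !git status              # run a shell command with !
#   > /tools                   # list the available built-in tools
```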
- Multimodal capabilities: Gemini CLI can even work with images and PDFs (because Gemini is multimodal). It supports integration with external Model Context Protocol (MCP) servers, so you can, for example, connect an image generator or a custom API. This lets you do things like "generate code from this sketch" or "summarize this PDF".
Try it: after completing the setup above, open a terminal in a project folder, type `gemini`, and start experimenting. You will quickly see how an AI companion in your shell can significantly boost your productivity!
Kanwal Mehreen is a machine learning engineer and a technical writer with a deep passion for data science and the intersection of AI with medicine. She co-authored the ebook "Maximizing Productivity with ChatGPT". As a Google Generation Scholar 2022 for APAC, she champions diversity and academic excellence. She is also recognized as a Teradata Diversity in Tech Scholar, a Mitacs GlobaLink Research Scholar, and a Harvard WeCode Scholar. Kanwal is an ardent advocate for change, having founded FemCodes to empower women in STEM.
