I use GitHub Copilot and Claude daily to modify code in Python, Java, and shell scripts. I use assistants for:

  • code generation in tab completion mode.
  • generating code for tasks that are small and very well scoped.
  • understanding parts of code bases.

Next, I want to explore if these assistants can help me with large learning tasks such as:

  • Learning a new programming language such as Rust to write production-level code.
  • Understanding a complex code base in a programming language I am familiar with.

This post is part of a series on my journey to learn Rust by implementing a useful project.

Goals Link to heading

Learn Rust Link to heading

Why Rust? It is very popular in the database domain and I want to be able to use Rust projects for my work.

  • As with human languages, I learn to read and understand much faster than I learn to write and create.
  • The basic constructs are similar across languages and easy to learn; the idiomatic constructs are hard.
  • I learn best by implementing a project that is useful to me.

AI Code Assistants should be able to help me with all these constraints.

pg-logstats Link to heading

I regularly use pgBadger to analyze Postgres logs. A major drawback is that pgBadger is implemented in Perl: to make edits, I would have to relearn Perl and maintain a Perl toolchain. AI code assistants could help with that, but Perl has no other use in my work or hobbies.

I am building pg-logstats in Rust as a modern PostgreSQL log analysis tool. It is inspired by pgBadger, especially its simplicity and how quickly it gets you to useful insights. At the same time, the goal is to modernize the capabilities:

  • implemented in Rust.
  • interfaces with other terminal utilities to display data.
  • interfaces with common observability tools to display data visually.

Preparation Link to heading

AI coding assistants have been around for some time, and there is a lot of information about people's experiences using them on large projects. This X post summarizes the current capabilities of AI coding assistants.

Specifically, the following section is relevant.

Everything falls to pieces as that complexity curve increases. And the problem is that any good product design process has increasing complexity. A basic prototype turns into a good prototype as soon as it has layered interactions, transitions, good affordances, hover states, 1000 tiny little details that make something feel correct and real.

Another important takeaway is a good workflow for a large project.

AI Code Prep and Workflow

How I Code with AI on a budget/free.

The following section describes the workflow.

  1. Plan & Brainstorm: Use the smarter/free web models (Gemini 2.5, o4-mini, Claude 3.7, 4, o3, etc) to figure out the approach, plan the steps, identify libraries, etc.
  2. Generate Agent Prompt: Ask one of these smart models: “Write a detailed-enough prompt for Cline, my AI coding agent, to complete the following tasks: [describe tasks]”.
  3. Execute with Cline: Paste the step-by-step task list into Cline, configured to use a stable and efficient model like GPT 4.1 or Claude 3.5 (or Claude 4 if it is doing really complicated things).

Execution Link to heading

Plan & Brainstorm Link to heading

I used Claude to help me plan the project. I started with the following prompt:

I want to implement a postgres log analysis tool inspired by pgbadger. pgbadger is a very mature tool with many features. I need your help in recreating the tool in phases. In each phase we have to choose the features to implement, design it and then help me create prompts to generate the code in an IDE. Shall we get started ?

Claude helped me plan the following phases:

  • Phase 1: Foundation & Core Parser
  • Phase 2: Query Analysis & Statistics
  • Phase 3: Time-based Analysis & Reporting
  • Phase 4: Advanced Features
  • Phase 5: Enterprise-ready features

I refined Phase 1 & Phase 2 with follow-up prompts and did not focus on later phases for a couple of reasons:

  • I want to iterate based on what I learn from using the features in the first two phases.
  • Claude assumed that pg-logstats would match all of pgBadger's features. Many of the features planned for the later phases are not relevant.

Generate Prompts & Execute with Cline Link to heading

Claude generated 11 detailed prompts to generate the code. The prompts were submitted to Cline, configured with the anthropic/claude-sonnet-4 model.

It took about 3 hours and approximately USD 20 to execute all the prompts. Consistent with others' experiences, the contexts sent over the API were large and repetitive. Multiple iterations were required to fix basic compile errors (for example, function visibility) and test failures. The quality of the generated code decreased as the contexts grew larger and the instructions got more complex.
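As an illustration of the visibility fixes: Rust items are private to their module by default, so code generated in one module often could not be called from another until pub was added. The sketch below is hypothetical; the parser module and parse_line function are illustrative, not pg-logstats code.

```rust
mod parser {
    // Without `pub`, this function is private to the `parser` module and
    // code in the crate root cannot call it; adding `pub` fixes the error.
    pub fn parse_line(line: &str) -> Option<&str> {
        // Strip a typical Postgres log prefix (illustrative only).
        line.strip_prefix("LOG:  ")
    }
}

fn main() {
    if let Some(message) = parser::parse_line("LOG:  connection received") {
        println!("{message}");
    }
}
```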

Fix and clean the project Link to heading

After executing all the prompts, the project was in a usable state. It had:

  • working code in Rust.
  • unit and integration tests.
  • demo scripts with Docker container definitions for a Postgres DB, workload generator and commands to analyze the logs.

However, certain parts required attention. pg-logstats has to parse queries, and the generated code used regular expressions for this. I replaced that approach with sqlparser-rs and a mutating visitor to normalize SQL queries. I used Claude on the web to help me generate a code snippet after I failed to add this capability with IDE prompts.

ℹ️ What is SQL query normalization?

Normalization removes constants from SQL queries.

Example: select .. where state = 'KA' is converted to select .. where state = ?

Queries that differ only in their constants can then be grouped together.
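The snippet below is a minimal sketch of this idea with sqlparser-rs, assuming the crate's visitor feature and one of the releases where Expr::Value wraps a Value directly (the enum's shape has changed across versions). The table and column names are illustrative, and this is not the actual pg-logstats code: it parses a statement with the Postgres dialect and rewrites every literal into a ? placeholder.

```rust
use std::ops::ControlFlow;

use sqlparser::ast::{visit_expressions_mut, Expr, Value};
use sqlparser::dialect::PostgreSqlDialect;
use sqlparser::parser::{Parser, ParserError};

/// Replace literal values with `?` so that queries differing only in
/// their constants normalize to the same string and can be grouped.
fn normalize(sql: &str) -> Result<String, ParserError> {
    let mut statements = Parser::parse_sql(&PostgreSqlDialect {}, sql)?;

    // Visit every expression in every statement and rewrite literals in place.
    let _ = visit_expressions_mut(&mut statements, |expr| {
        if matches!(expr, Expr::Value(_)) {
            *expr = Expr::Value(Value::Placeholder("?".to_string()));
        }
        ControlFlow::<()>::Continue(())
    });

    Ok(statements
        .iter()
        .map(|s| s.to_string())
        .collect::<Vec<_>>()
        .join("; "))
}

fn main() {
    let normalized =
        normalize("SELECT id FROM orders WHERE state = 'KA' AND amount > 100").unwrap();
    println!("{normalized}");
    // Prints: SELECT id FROM orders WHERE state = ? AND amount > ?
}
```

Compared with regular expressions, a parser-based visitor handles nested expressions, IN lists, and quoting correctly, which is why I preferred it for normalization.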

Summary Link to heading

Overall, this project was a success. I was able to drastically cut down the development time to create a project that is useful to me, in a new language that I want to learn. The investment in time and money was reasonable. Without these tools, it would have taken me a couple of months to create the first version of the project.