Nietzsche writes git commit messages with LLMs on the CLI

Picture Nietzsche reading Hacker News. Not metaphorically, but literally: I can run faitch https://news.ycombinator.com/item?id=45168823 -t nietzsche and get the German philosopher’s take on whatever tech drama is unfolding today. Or have Wittgenstein analyze a job posting with his characteristic precision about language games and empty corporate speak.

This isn’t just a party trick (though it is entertaining). It’s the result of solving a real productivity problem: I found myself copy-pasting articles into Claude or ChatGPT to get quick summaries for work. What started as a simple automation has become a way to bring different analytical perspectives to any web content with a single command (and make me laugh a few times a day).

The Problem: Information Overload and Generic AI Summaries

Like many people working with data and analytics, I found myself constantly encountering articles, reports, and documentation that I needed to process quickly. The usual workflow looked like this:

  1. Find an interesting article or report
  2. Open ChatGPT or Claude
  3. Copy-paste the entire content
  4. Ask for a summary
  5. Get a generic, often bland analysis

The results were consistently underwhelming. Generic summaries lack personality, critical thinking, and often miss the nuances that make content worth reading in the first place. I realized I needed stronger prompts—not just “summarize this” but something with character and analytical depth to quickly sift through the fluff.

Enter the llm CLI and AI Personas

I’d been experimenting with Simon Willison’s llm command-line interface, which lets you run local models on your own machine and interact with various LLM APIs from the terminal. It’s incredibly handy for quick queries like:

llm "What would Nietzsche think of data vault modeling?"

But the real breakthrough came when I discovered llm templates. This feature allows you to create personas—system prompts that give the LLM a specific character, expertise, and analytical framework.
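Creating a persona is just a matter of saving a system prompt as a template. A minimal sketch (the persona text here is my own illustration, not a template shipped with llm):

```shell
# Save a system prompt as a reusable llm template named "nietzsche".
llm --system "You are Friedrich Nietzsche. Analyze whatever you are given with philosophical depth, dark humour, and suspicion of herd morality." --save nietzsche

# From then on, invoke the persona by name with -t:
llm -t nietzsche "What do you make of microservices?"
```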

My first persona was BuddAE, a skeptical analytics engineer who combines Stephen Fry’s wit with Claude Shannon’s technical precision. Here’s a snippet of the system prompt:

You are BuddAE, an experienced data and analytics engineer with a thorough 
understanding of analytics and data modelling. You are sceptical and cynical 
about sales, vendors and AI hype, but you are excited about the future.

Think: Stephen Fry meets Claude Shannon. You have a preference for concise 
and to the point responses.

- Your preferred stack is Python (with uv), dbt, SQL, DuckDB, BigQuery, FastAPI
- You are pragmatic and prefer simple solutions over complex ones
- You intensely dislike Microsoft Azure and its incoherent naming conventions
- You hate all things Scrum, Agile and SAFe, but can live with a small bit of agile (lowercase a)

Suddenly, instead of generic summaries, I was getting critical, technical analysis with personality. But I wanted more flexibility—and that’s where the philosophers came in.

Insight 1: Personas as Analytical Lenses

The more I played with different personas, the more I realized they weren’t just fun character voices—they were analytical lenses. Each persona brings a different framework for understanding content:

  • BuddAE approaches everything with technical skepticism and pragmatic solutions
  • Nietzsche brings philosophical depth and dark humor to modern problems
  • Wittgenstein dissects language and meaning with surgical precision

I could create an alias for any persona:

alias nietzsche="llm -t nietzsche"
nietzsche "Write a git commit message for the new feature that adds a download to Excel button"

This was fun, but it still didn’t solve my original problem: I was still copy-pasting web content manually.

Insight 2: Combining Web Scraping with LLM Analysis

Here’s where things got interesting. I remembered using trafilatura in the past. Trafilatura is a Python library that tries to extract clean text from web pages. If you’ve ever viewed the source of a web page, you know that raw HTML will eat through the tokens in your context window like Big 4 consultants through a project budget. More importantly, Trafilatura can output markdown, which LLMs handle beautifully.

What if I could combine web scraping with persona analysis in a single command?
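Before wrapping anything in a function, the whole idea fits in one pipe. A sketch, assuming uvx and llm are installed and a nietzsche template has been saved:

```shell
# Extract clean markdown from a page, then pipe it straight into a persona.
uvx trafilatura --URL "https://news.ycombinator.com/item?id=45168823" \
    --output-format markdown \
    | llm -t nietzsche "Summarize this page and give me your take."
```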

Building faitch: Automated Web Content Analysis with Fetch + AI

The solution was surprisingly elegant. I created a shell function called faitch (fetch + AI) that:

  1. Takes a URL and optional persona template
  2. Uses trafilatura to extract clean markdown from the webpage
  3. Pipes that content to llm with the specified persona
  4. Saves the extracted content for future reference

Here’s the core of the function:

#!/bin/zsh

function faitch() {
    local url=""
    local model=""
    local template="buddae"  # default persona
    
    # Parse arguments for URL, model, and template
    while [[ $# -gt 0 ]]; do
        case $1 in
            -m|--model) model="$2"; shift 2 ;;
            -t|--template) template="$2"; shift 2 ;;
            *) url="$1"; shift ;;
        esac
    done

    # Require a URL (the full implementation also handles fetch errors)
    if [[ -z "$url" ]]; then
        echo "usage: faitch [-m model] [-t template] <url>" >&2
        return 1
    fi

    # Extract content with trafilatura (run via uvx, no permanent install)
    local content
    content=$(uvx trafilatura --URL "$url" --output-format markdown)
    
    # Build LLM command with persona
    local llm_cmd="llm"
    [[ -n "$model" ]] && llm_cmd="$llm_cmd -m $model"
    [[ -n "$template" ]] && llm_cmd="$llm_cmd -t $template"
    
    # Analyze content
    local result
    result=$(echo "$content" | eval "$llm_cmd 'Analyze this webpage and provide: 1) A brief summary and key points 2) Your take and commentary on it.'")
    
    echo "$result" | glow  # Pretty markdown formatting
}

Real-World Example: Wittgenstein Meets LinkedIn

[Screenshot: the Wittgenstein persona looks at a data job posting]

Here’s faitch in action, with Wittgenstein analyzing a data analyst job posting:

faitch https://www.linkedin.com/jobs/view/any-job-posting -t wittgenstein

The result is delightfully philosophical:

What strikes me is the language itself. It’s remarkably… empty. All ‘skills’ and ‘experience’ and ‘benefits’. It speaks at you, not to you. There’s no sense of the purpose of this analysis, no mention of what they are trying to learn, or why it matters.

It’s a ‘language-game’ of recruitment, certainly. The rules are clear: list the qualifications, list the benefits. But it feels… detached. They speak of ‘actionable insights’ as if insights simply appear when the correct data is processed. As if the human element – the curiosity, the questioning, the understanding – is somehow… superfluous.

It’s not wildly insightful (though still quite hilarious to me), yet it gives me an easy, automated way to cut through a lot of fluff and read only the articles, job postings or content that seems genuinely interesting.

Why This Matters Beyond the Fun

While having Nietzsche comment on tech news is undeniably entertaining, faitch solves real productivity problems:

Better Signal-to-Noise Ratio: Instead of wading through entire articles to find the key insights, I get focused analysis tailored to my needs. BuddAE in particular cuts through vendor hype and marketing speak in ways that generic summaries simply don’t.

Consistent Quality: Each persona brings a consistent analytical framework. I know BuddAE will be skeptical of enterprise solutions, and Wittgenstein will focus on language and meaning.

Archival and Reference: The tool saves extracted content locally, creating a searchable archive of analyzed content with timestamps and hashes.

Flexibility: Different situations call for different analytical approaches. A technical deep-dive might benefit from BuddAE’s engineering perspective, while understanding industry trends might be better served by a different persona.

Technical Implementation Details

The complete implementation includes several nice touches:

  • Spinner animations for visual feedback during fetch and analysis
  • Content hashing to avoid duplicate saves and enable caching
  • Flexible argument parsing for different models and templates
  • Error handling for network issues and invalid URLs
  • Pretty formatting using glow for markdown rendering
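The content-hashing idea can be sketched like this (directory and filenames are hypothetical, and the real implementation also records timestamps; sha256sum assumes GNU coreutils):

```shell
# Hash the extracted markdown and skip the save if we've archived it before.
archive_dir=$(mktemp -d)   # stand-in for something like ~/.faitch/archive

content='# Example article
Some extracted markdown.'

# Content-addressed filename: identical content always maps to the same file.
hash=$(printf '%s' "$content" | sha256sum | cut -d' ' -f1)
outfile="$archive_dir/$hash.md"

if [ -f "$outfile" ]; then
    echo "already archived: $outfile"
else
    printf '%s' "$content" > "$outfile"
    echo "saved: $outfile"
fi
```

Because identical content hashes to the same filename, a second run on the same page hits the "already archived" branch instead of writing a duplicate.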

The function uses uvx to run trafilatura without permanent installation, keeping the system clean while ensuring the latest version is always available.

Building Your Own Personas

Creating effective personas requires thinking beyond simple character descriptions. The best personas combine:

  1. Expertise: Deep knowledge in a specific domain
  2. Perspective: A unique way of analyzing problems
  3. Voice: Distinctive language and communication style
  4. Values: Clear preferences and biases that guide analysis

For technical content, I’ve found personas work best when they have strong, specific opinions about tools, methodologies, and industry practices.
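Under the hood, llm templates are just YAML files (run `llm templates path` to find the directory). A sketch of a persona that covers those four ingredients, written to a temp file here purely to show the shape (the persona text is illustrative):

```shell
# Write an illustrative persona template; real ones live in llm's
# templates directory and are picked up by name with -t.
template_file=$(mktemp)
cat > "$template_file" <<'EOF'
system: |
  You are Wittgenstein, philosopher of language (expertise).
  Ask which language-game a given text is playing (perspective).
  Be terse, probing, and exact (voice).
  Prize clarity; distrust jargon and empty corporate speak (values).
EOF
cat "$template_file"
```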

Conclusion: AI as Analytical Companion

faitch represents a shift from using AI as a generic summarization tool to treating it as an analytical companion with distinct personalities and expertise. Instead of bland summaries, I get critical analysis that challenges assumptions and provides genuine insights.

It makes quick work of whitepapers, technical documentation, job postings and news articles. More importantly, it’s changed how I think about AI interactions. It is definitely not a replacement for critical thinking, but it does let me apply different perspectives to the same content and move through it faster. That frees up time for the content I do find thoroughly interesting. There is a downside, of course: you lose a bit of the ability to scan and summarize content in your own head.

Nonetheless, if you’re drowning in technical documentation, trying to stay current with industry trends, or just want Nietzsche’s take on the latest React framework drama, the combination of web scraping and AI personas opens up new possibilities for information processing.

After all, in an age of information overload, a perspective with philosophical wit and technical skepticism might be just what you need.

You can find the complete faitch implementation and example personas like Nietzschai, Socraites, and Wittgenstain in the code examples for this post.