For those who have followed my podcast transcription project, you’ve already seen some of the challenges I’ve tackled in previous posts, such as exploring transcription methods in “Cracking the Code”, building the foundation of my AI system in “The Great Podcast Download”, and grounding my AI model in my podcast history in “Building an AI System”.
With these pieces in place, my next challenge was automating the transcription process for the entire podcast archive. This meant creating a tool that could handle large directories of podcast episodes, efficiently transcribe each one, and ensure a seamless workflow.
I’ve worked in many different programming languages throughout my career, which often means I forget the exact syntax or module names when starting a new project. Usually, I end up spending a fair bit of time looking up syntax or refreshing my memory on specific libraries. But for this project, I wanted to try something different. Because I work so closely with large AI models, I was curious to see how far I could get by having the model write all the code, while I focused on describing the system in plain English.
What followed was an incredibly productive collaboration, where the model not only responded to my requests but helped refine my ideas, transforming a basic script into a robust transcription tool. In this post, I’ll walk through how that collaboration unfolded and how the model contributed to the development of a powerful solution that now automates a key part of my podcast project.
The Task at Hand
The initial goal was simple: automate the transcription of my podcast archive. The podcasts were stored in a directory structure where each podcast series had its own folder, and within each folder were multiple episodes as .mp3 files. I needed a tool to efficiently transcribe these episodes using Whisper, an open-source automatic speech recognition model.
I didn’t have a fully defined set of requirements from the start. Instead, the process was organic—each iteration with the AI model led to new ideas and improvements. What started as a basic transcription shell script slowly evolved as I refined it with more features and considerations that became clear through the development process.
For example, initially, I simply wanted to loop through the podcast files and transcribe them. But after the first draft, it became obvious that the script should be able to:
- Process a directory of podcasts: Loop through each podcast folder and its .mp3 files to ensure only the correct audio files were processed.
- Handle re-runs: If the script was run multiple times, it shouldn’t re-transcribe files that had already been processed.
- Recover from interruptions: If the script were interrupted or crashed, it should pick up where it left off without needing to start over.
- Simulate a run (Dry Run): Before making changes, it would be useful to simulate the process to confirm what the script was about to do.
- Generate useful statistics: At the end of the process, I wanted a summary of how many episodes were processed, how many had already been transcribed, and how many were transcribed during the current run.
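Taken together, the re-run and recovery rules amount to a small decision table: for each episode, look at which companion files exist and decide whether to skip, clean up, or transcribe. A minimal sketch of that logic (the function and its names are my own illustration, not code from the final script):

```python
def decide_action(has_transcript: bool, has_in_progress_marker: bool) -> str:
    """Decide what to do with an episode based on which artifacts exist.

    has_transcript: a finished .txt transcription already exists
    has_in_progress_marker: a leftover marker from an interrupted run exists
    """
    if has_in_progress_marker:
        # A previous run was interrupted mid-transcription: discard the
        # partial output and start this episode over.
        return "clean_up_and_retry"
    if has_transcript:
        # Idempotency: never re-transcribe a finished episode.
        return "skip"
    return "transcribe"

# The three cases:
print(decide_action(has_transcript=True, has_in_progress_marker=False))   # skip
print(decide_action(has_transcript=False, has_in_progress_marker=True))   # clean_up_and_retry
print(decide_action(has_transcript=False, has_in_progress_marker=False))  # transcribe
```

Keeping the decision separate from the file-system side effects like this also makes the rules trivial to test without touching real podcast files.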
These requirements evolved naturally as I worked through the project, guided by how the AI model responded to my needs. Each time I described what I wanted in English, the model would generate code that not only met my expectations but often inspired new ways to improve the system.
The next step was to start iterating on this evolving solution, and that’s where the collaboration with the AI really began to shine.
Iterative Development with AI
The development process with the AI model was truly collaborative. I would describe a new feature or refinement I wanted, and the model would generate code that worked surprisingly well. With each iteration, the script became more powerful and refined, responding to both my immediate needs and unforeseen challenges that emerged along the way.
First Step: Starting with a Bash Script
Initially, I started with a simple bash script to iterate over each .mp3 file in the podcast directory and transcribe it using Whisper. The script was straightforward, but as I began adding more features—like error handling and checking for existing transcriptions—it became clear that the complexity was growing. Bash wasn’t the right tool for this level of logic, so I decided to ask the AI model to convert the script to Python. The transition was smooth, and Python provided the flexibility I needed for more sophisticated control flow.
#!/bin/bash

# Directory containing podcast files
DIRECTORY="/opt/podcasts"

# Path to the Whisper executable
WHISPER_PATH="/home/allen/whisper/bin/whisper"

# Check if the directory exists
if [ -d "$DIRECTORY" ]; then
    for FILE in "$DIRECTORY"/*; do
        if [ -f "$FILE" ]; then
            echo "Transcribing $FILE"
            "$WHISPER_PATH" "$FILE" --output_dir "$(dirname "$FILE")" --output_format txt
        fi
    done
else
    echo "Directory $DIRECTORY does not exist."
fi
Second Iteration: Basic Transcription Script in Python
Once we moved the script to Python, the first version was simple: iterate over a directory of podcast .mp3 files and use Whisper to transcribe them. The model generated a Python script that correctly handled reading the files and transcribing them using the Whisper command-line tool. This version worked perfectly for basic transcription, but I quickly realized that additional features were needed as the project evolved.
import os
import subprocess

DIRECTORY = "/opt/podcasts"
WHISPER_PATH = "/home/allen/whisper/bin/whisper"

def transcribe_podcasts():
    if os.path.isdir(DIRECTORY):
        for filename in os.listdir(DIRECTORY):
            file_path = os.path.join(DIRECTORY, filename)
            if os.path.isfile(file_path):
                print(f"Transcribing {file_path}")
                subprocess.run([WHISPER_PATH, file_path, "--output_dir", os.path.dirname(file_path), "--output_format", "txt"])
    else:
        print(f"Directory {DIRECTORY} does not exist.")

transcribe_podcasts()
Third Iteration: Adding Dry Run Mode
After the initial transcription script, I realized it would be helpful to simulate a run before making any changes. I asked the model to add a “dry run” mode, where the script would only print out the files it intended to transcribe without actually performing the transcription. This feature gave me confidence that the script would do what I expected before it ran on my actual data.
import os
import subprocess
import argparse

DIRECTORY = "/opt/podcasts"
WHISPER_PATH = "/home/allen/whisper/bin/whisper"

def transcribe_podcasts(dry_run=False):
    if os.path.isdir(DIRECTORY):
        for filename in os.listdir(DIRECTORY):
            file_path = os.path.join(DIRECTORY, filename)
            if os.path.isfile(file_path):
                if dry_run:
                    print(f"Dry run: would transcribe {file_path}")
                else:
                    print(f"Transcribing {file_path}")
                    subprocess.run([WHISPER_PATH, file_path, "--output_dir", os.path.dirname(file_path), "--output_format", "txt"])
    else:
        print(f"Directory {DIRECTORY} does not exist.")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Transcribe podcasts using Whisper")
    parser.add_argument("-d", "--dry-run", action="store_true", help="Perform a dry run without actual transcription")
    args = parser.parse_args()
    transcribe_podcasts(dry_run=args.dry_run)
Fourth Iteration: Idempotency
The next improvement was addressing idempotency. Since I had a large collection of podcasts, I didn’t want the script to re-transcribe episodes that had already been processed. I needed a way to detect whether a transcription file already existed and skip those files. I explained this in plain English, and the model quickly generated a check for existing transcription files, only processing files that hadn’t already been transcribed.
import os
import subprocess
import argparse

DIRECTORY = "/opt/podcasts"
WHISPER_PATH = "/home/allen/whisper/bin/whisper"

def transcribe_podcasts(dry_run=False):
    if os.path.isdir(DIRECTORY):
        for filename in os.listdir(DIRECTORY):
            file_path = os.path.join(DIRECTORY, filename)
            transcription_file = os.path.splitext(file_path)[0] + ".txt"
            if os.path.isfile(file_path):
                if os.path.exists(transcription_file):
                    print(f"Skipping {file_path}: transcription already exists.")
                else:
                    if dry_run:
                        print(f"Dry run: would transcribe {file_path}")
                    else:
                        print(f"Transcribing {file_path}")
                        subprocess.run([WHISPER_PATH, file_path, "--output_dir", os.path.dirname(file_path), "--output_format", "txt"])
    else:
        print(f"Directory {DIRECTORY} does not exist.")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Transcribe podcasts using Whisper")
    parser.add_argument("-d", "--dry-run", action="store_true", help="Perform a dry run without actual transcription")
    args = parser.parse_args()
    transcribe_podcasts(dry_run=args.dry_run)
Fifth Iteration: Handling Incomplete Transcriptions
As the script matured, I realized another edge case: what happens if the transcription is interrupted or the script crashes? In such cases, I didn’t want partially completed transcriptions. So, I asked the model to handle this scenario by using temporary “in-progress” files. The model created a mechanism where a temporary file would be generated at the start of transcription and deleted only upon successful completion. If the script detected an “in-progress” file on the next run, it would clean up and start fresh, ensuring that no partial transcriptions were left behind.
import os
import subprocess
import argparse

DIRECTORY = "/opt/podcasts"
WHISPER_PATH = "/home/allen/whisper/bin/whisper"
TEMP_FILE_SUFFIX = ".transcription_in_progress"

def transcribe_podcasts(dry_run=False):
    if os.path.isdir(DIRECTORY):
        for filename in os.listdir(DIRECTORY):
            file_path = os.path.join(DIRECTORY, filename)
            transcription_file = os.path.splitext(file_path)[0] + ".txt"
            temp_file = transcription_file + TEMP_FILE_SUFFIX
            if os.path.isfile(file_path):
                if os.path.exists(temp_file):
                    print(f"Detected unfinished transcription for {file_path}.")
                    # Remove both the marker and any partial transcription so
                    # the episode is transcribed fresh on the next pass.
                    os.remove(temp_file)
                    if os.path.exists(transcription_file):
                        os.remove(transcription_file)
                elif os.path.exists(transcription_file):
                    print(f"Skipping {file_path}: transcription already exists.")
                else:
                    if dry_run:
                        print(f"Dry run: would transcribe {file_path}")
                    else:
                        print(f"Transcribing {file_path}")
                        open(temp_file, 'w').close()  # Mark transcription as in progress
                        try:
                            subprocess.run([WHISPER_PATH, file_path, "--output_dir", os.path.dirname(file_path), "--output_format", "txt"])
                        finally:
                            if os.path.exists(temp_file):
                                os.remove(temp_file)
    else:
        print(f"Directory {DIRECTORY} does not exist.")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Transcribe podcasts using Whisper")
    parser.add_argument("-d", "--dry-run", action="store_true", help="Perform a dry run without actual transcription")
    args = parser.parse_args()
    transcribe_podcasts(dry_run=args.dry_run)
Final Iteration: Adding Statistics
The last feature I asked for was a way to track progress and output useful statistics at the end of each run. I wanted to know how many .mp3 files had been processed, how many had already been transcribed, and how many were transcribed during the current session. The model quickly integrated these statistics into the script, both for dry runs and actual transcription runs.
import os
import subprocess
import argparse

DIRECTORY = "/opt/podcasts"
WHISPER_PATH = "/home/allen/whisper/bin/whisper"
TEMP_FILE_SUFFIX = ".transcription_in_progress"

stats = {
    "total_mp3_files": 0,
    "already_transcribed": 0,
    "waiting_for_transcription": 0,
    "transcribed_now": 0
}

def transcribe_podcasts(dry_run=False):
    if os.path.isdir(DIRECTORY):
        for filename in os.listdir(DIRECTORY):
            file_path = os.path.join(DIRECTORY, filename)
            transcription_file = os.path.splitext(file_path)[0] + ".txt"
            temp_file = transcription_file + TEMP_FILE_SUFFIX
            if os.path.isfile(file_path) and file_path.endswith(".mp3"):
                stats["total_mp3_files"] += 1
                if os.path.exists(temp_file):
                    print(f"Detected unfinished transcription for {file_path}.")
                    # Remove both the marker and any partial transcription so
                    # the episode is transcribed fresh on the next pass.
                    os.remove(temp_file)
                    if os.path.exists(transcription_file):
                        os.remove(transcription_file)
                elif os.path.exists(transcription_file):
                    print(f"Skipping {file_path}: transcription already exists.")
                    stats["already_transcribed"] += 1
                else:
                    if dry_run:
                        print(f"Dry run: would transcribe {file_path}")
                        stats["waiting_for_transcription"] += 1
                    else:
                        print(f"Transcribing {file_path}")
                        open(temp_file, 'w').close()  # Mark transcription as in progress
                        try:
                            subprocess.run([WHISPER_PATH, file_path, "--output_dir", os.path.dirname(file_path), "--output_format", "txt"])
                            stats["transcribed_now"] += 1
                        finally:
                            if os.path.exists(temp_file):
                                os.remove(temp_file)
    else:
        print(f"Directory {DIRECTORY} does not exist.")

    print("\n--- Transcription Statistics ---")
    print(f"Total MP3 files processed: {stats['total_mp3_files']}")
    print(f"Already transcribed: {stats['already_transcribed']}")
    if dry_run:
        print(f"Waiting for transcription: {stats['waiting_for_transcription']}")
    else:
        print(f"Transcribed during this run: {stats['transcribed_now']}")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Transcribe podcasts using Whisper")
    parser.add_argument("-d", "--dry-run", action="store_true", help="Perform a dry run without actual transcription")
    args = parser.parse_args()
    transcribe_podcasts(dry_run=args.dry_run)
Collaboration with the Model
What struck me most about this process was how natural and intuitive it felt to work with the AI model. As I mentioned earlier, starting a new project usually means looking up syntax and refreshing my memory on specific libraries, but in this case I was able to offload much of that effort to the model.
At every step, I provided the model with a plain English description of what I wanted the script to do, and it responded by writing the code. This wasn't just rote code generation: the model produced thoughtful, well-structured solutions that responded directly to the needs I described. When I wanted something more specific, like a dry run mode or idempotency, the model not only understood but implemented those features in a way that felt seamless.
That said, my own programming experience was still critical throughout this process. While the model was incredibly effective at generating code, I relied heavily on my background in software development to guide the model’s work, define the system’s architecture, and debug the output when necessary. It wasn’t just about letting the model do everything—it was about using my expertise to spot edge cases, identify potential issues, and ensure that the code the model produced was robust and reliable.
The most remarkable aspect of this collaboration was the ability to iterate. I didn’t need to sit down and write out a complete, detailed spec for the entire project from the beginning. Instead, I approached the model with a rough idea of what I needed, and through a series of interactions, the project naturally grew more sophisticated. The model helped me refine the initial concept and introduce new features that I hadn’t considered at the outset.
This dynamic, back-and-forth interaction mirrored the kind of iterative workflow I often use when collaborating with colleagues. The difference, of course, is that this was all happening in real time with an AI model—without needing to dig into documentation, refactor code, or troubleshoot syntax issues.
In the end, I found that the model wasn’t just a tool for automating transcription; it became a partner in developing the solution itself. By offloading the technical nuances of code writing to the AI, I was able to focus more on the high-level design of the system.
Working with the AI model on this project demonstrated to me the potential of AI-assisted development—not as a replacement for programming skills, but as a highly effective augmentation to those skills. My programming knowledge was still a vital part of guiding the project, but with the model handling much of the heavy lifting, I could focus on the overall architecture and problem-solving. For me, that’s an incredibly exciting shift in the way I approach building systems.
Announcing Podcast-Rag: A Comprehensive Podcast Retrieval-Augmented Generation (RAG) System
I’m excited to announce the open-source release of Podcast-Rag, a project that began as the podcast transcription tool described in this article and is evolving into something much more. Podcast-Rag will eventually become a comprehensive podcast RAG system, integrating with a large model to offer powerful insights and automated workflows for managing large-scale podcast archives.
What is Retrieval-Augmented Generation (RAG)?
In the context of AI and natural language processing, Retrieval-Augmented Generation (RAG) is a powerful concept that combines the strengths of information retrieval with text generation. The idea is simple: instead of generating text purely from a model’s pre-trained knowledge, RAG systems search for relevant documents or data from a knowledge base and use that information to produce more accurate and contextually rich responses.
Imagine a large language model working alongside a search engine. When the model is asked a question, it retrieves the most relevant documents or podcasts from a repository, like my own archive, and uses that information to generate a response. This allows RAG systems to provide highly informed answers that go beyond the limits of a pre-trained model’s knowledge.
For Podcast-Rag, this approach will be pivotal. The long-term goal is to combine transcription and retrieval to build a system that can dynamically surface relevant episodes, segments, or quotes based on user queries. By integrating RAG, we’ll not only transcribe podcasts but also empower users to retrieve and interact with specific pieces of information from an entire podcast archive. This takes podcast management and analysis to a new level of intelligence, making the system more interactive and useful for tasks like research, content discovery, and more.
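As a rough illustration of the retrieval half of this idea, the sketch below scores transcripts against a query by simple word overlap and returns the best match. This is a toy stand-in only: a real system would use embeddings and a vector store, and the episode titles and transcript text here are invented for the example.

```python
from collections import Counter

def score(query: str, text: str) -> int:
    """Score a transcript against a query by counting occurrences of shared
    words (a toy stand-in for embedding similarity)."""
    query_words = set(query.lower().split())
    text_counts = Counter(text.lower().split())
    return sum(text_counts[w] for w in query_words)

def retrieve(query: str, transcripts: dict, top_k: int = 2) -> list:
    """Return the top_k transcript titles most relevant to the query."""
    ranked = sorted(transcripts, key=lambda t: score(query, transcripts[t]), reverse=True)
    return ranked[:top_k]

# Invented example transcripts; in the real system these would be the
# Whisper .txt files produced by the transcription script.
transcripts = {
    "episode-12": "today we talk about whisper and speech recognition models",
    "episode-30": "a deep dive into retrieval augmented generation for podcasts",
    "episode-45": "gardening tips and growing tomatoes at home",
}

print(retrieve("retrieval augmented generation", transcripts, top_k=1))  # → ['episode-30']
```

The retrieved transcripts (or segments of them) would then be placed into the language model's prompt, grounding its answer in the actual podcast archive rather than its pre-trained knowledge alone.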
Right now, the system includes robust transcription features, handling everything from large directories of podcast episodes to ensuring that transcriptions are idempotent and recover gracefully from crashes. It also offers dry run mode and detailed statistics for each run.
But this is just the beginning. Over time, Podcast-Rag will evolve into a full-featured system that integrates AI to provide rich interactions and insights, transforming how podcast archives are managed and analyzed.
You can explore the current state of the project, contribute to its growth, or use it to streamline your transcription workflows by visiting the Podcast-Rag repository on GitHub.
Conclusion
This project was far more than an exercise in automating podcast transcription—it was a firsthand experience in seeing the potential of AI-assisted development. Over the years, I’ve written a lot of code, and I’ve always approached new projects with the mindset of leveraging my programming expertise. But working with the AI model shifted that dynamic. By letting the model handle the code generation, I was able to focus more on the overall system design, while still relying on my background to guide the development process and resolve any issues.
What really stood out during this collaboration was how natural the process felt. I could describe my requirements in plain English, and the model responded by generating code that was not only functional but often elegant. The model adapted to new requests, introduced features I hadn’t thought of, and iterated on the script in a way that mirrored working with another developer.
That said, the AI didn’t replace my programming skills; it augmented them. My experience was still critical to ensuring the script worked as expected, debugging when necessary, and refining the overall system. The model handled the details of coding, but I provided the architecture and oversight, creating a powerful synergy that made the development process faster and more efficient.
In the end, this project showed me just how transformative AI-assisted development can be. It allows developers to focus on the high-level design and logic of a system while offloading much of the code-writing burden to the model. For me, that’s an exciting new way to build solutions, one that feels more collaborative and less about getting bogged down in syntax or boilerplate.
This experience has left me eager to explore more ways AI can assist in development. Whether it’s refining future scripts, automating other parts of my workflow, or pushing the boundaries of what’s possible in AI-driven projects, I’m more convinced than ever that AI will be a critical part of how I approach coding in the future.