Did it work?
Project Arachne is goooooooo!
So, at the end of last week I wanted to have 5 books posted up in Polish by the end of the weekend. We’re not quite there yet, but that’s because I’ve been figuring out how to run multiple things in parallel! Lemme give you the overview of my translation engine and how things are rolling so far.
Here’s the high level picture of things:
- [ ] **Phase 1: Foundation**
  - [x] **Step 0: Source Prep**
    - [x] Convert `.docx` to Markdown.
    - [x] Clean artifacts.
  - [x] **Step 1: Chunking**
    - [x] Split full source into `CHXXX_source.md`.
  - [ ] **Step 2: Term Harvest**
    - [ ] Scan for new glossary terms (The “Nightmare” entities?)
- [ ] **Phase 2: Translation (The Diamond Run)**
  - [ ] **Batch 1: CH001–CH005**
    - [x] **CH001** (Diamond Run)
      - [x] Draft (Claude) -> Done (Fresh)
      - [ ] Editor Audit (GPT-5.2) -> *Catch Errors*
      - [ ] Verification Pass (GPT-5.2) -> *Verify Fixes*
      - [ ] SmellHunt (Claude) -> *Generate Suggestions Only*
      - [ ] Drift Adjudication (Gemini) -> **FINAL DECISION**
      - [ ] Final Output -> (Immutable)
    - [ ] **CH002**
    - [ ] **CH003**
    - [ ] **CH004**
    - [ ] **CH005**
  - [ ] **Batch 2: CH006–CH010**
    - ... (To be expanded)
- [ ] **Phase 3: Assembly**
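For the curious: the Step 1 chunking above is the kind of thing a short script handles. Here's a minimal sketch, assuming the manuscript marks chapters with `# Chapter N` headings (that pattern is my placeholder; swap in whatever your source actually uses):

```python
import re
from pathlib import Path

def chunk_source(source_path, out_dir):
    """Split a full Markdown manuscript into CHXXX_source.md files,
    one per chapter, breaking on '# Chapter N' style headings."""
    text = Path(source_path).read_text(encoding="utf-8")
    # Split on a zero-width match so each heading stays attached
    # to the chapter body that follows it.
    parts = re.split(r"(?m)^(?=# Chapter \d+)", text)
    chapters = [p for p in parts if p.strip()]
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for i, chapter in enumerate(chapters, start=1):
        name = f"CH{i:03d}_source.md"
        (out / name).write_text(chapter, encoding="utf-8")
        written.append(name)
    return written
```

Each downstream stage then only ever sees one `CHXXX_source.md` at a time, which is what makes the rest of the pipeline parallelizable.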
I’ll be sharing the back-end scripts for each of these layers, and tips for running them, over the course of the upcoming week for paid subscribers — but you’re welcome to talk to your AIs about this flow and work them out on your own!
Basically though, this is the cycle it runs through on EVERY chapter, and then there’s a dash of secret sauce at the end.
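If you want to picture that cycle as code: here's a hypothetical sketch of a per-chapter runner. Every stage function here is a stand-in (the real ones are API calls to Claude, GPT-5.2, or Gemini), but the shape of the loop is the point:

```python
def run_chapter(source_text, stages):
    """Run one chapter through an ordered list of (name, fn) stages.
    Each stage takes the state dict and returns an updated copy."""
    state = {"source": source_text, "log": []}
    for name, stage in stages:
        state = stage(state)
        state["log"] = state["log"] + [name]  # record stage order, no mutation
    return state

# Stand-in stages; the real ones would be model API calls.
stages = [
    ("draft",             lambda s: {**s, "draft": s["source"]}),
    ("editor_audit",      lambda s: {**s, "issues": []}),
    ("verification_pass", lambda s: {**s, "verified": True}),
    ("smell_hunt",        lambda s: {**s, "suggestions": []}),
    ("adjudication",      lambda s: {**s, "final": s["draft"]}),
]
```

Because every stage gets the whole state and returns a new one, you can reorder stages, drop one, or bolt on secret sauce at the end without touching the others.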
It’s trivially easy for any AI to pull together the output into an epub, and I’m sitting on two full Polish epubs now!
Basically, what took the time over the weekend was conceptualizing how this’ll run in the future, and how to separate components so that they can mooooosssttllly run without human intervention.
So — in Antigravity, you can only call one model per chat, so I’m having to use API calls for Claude and GPT-5.2 (and it’s important that you do so, for the cage match; otherwise one AI alone will get lazy and pedantic).
Since those are scripts, you can run them sequentially, on their own — but for the next book (which I’m’n’a start up after I finish this post!) I’mma run with Claude’s batch function, which’ll be about half the API rate (or so they claim! I will report back!), on blocks of ten chapters at a time, so that we catch drift before it happens, and it’ll also make everything cheaper for me.
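The block-of-ten grouping itself is just a slicing problem. A tiny sketch (the chapter IDs and 25-chapter book are made up for illustration):

```python
def batch_chapters(chapter_ids, batch_size=10):
    """Group chapter IDs into fixed-size blocks, so each block can go
    out as one batch job and drift gets checked between blocks."""
    return [chapter_ids[i:i + batch_size]
            for i in range(0, len(chapter_ids), batch_size)]

chapters = [f"CH{n:03d}" for n in range(1, 26)]  # a hypothetical 25-chapter book
batches = batch_chapters(chapters)
# three blocks: 10, 10, and 5 chapters
```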
Now — currently — I’m having to nudge AG’s Gemini every few chapters for the adjudication step, where it determines whether or not Claude’s final set of suggestions is any good, because eventually it times out without any human intervention.
I figured out (from how the Claude and 5.2 scripts work independently) that I could script that as a Gemini API call (rather than using the Gemini 3 Pro I’ve got it set on inside the AG chat itself) — and I was willing to pay for the automation of that step, except that….
All my shit’s high heat.
And the second I tossed a steamy chapter to Gemini’s API it freaked out and blocked the content.
Soooooooo I’m not going to be able to get to a fully push-button system personally — I’ll have to keep relying on the Gemini 3 Pro chat inside AG that I’m working in for that final step, because it ‘gets’ my project and is not clutching pearls — but that’s OK, because the quality of what I’m getting out is astounding.
As Gemini 3 said, inside this chat:
Working so hard with AG for the past week’s been INTENSELY illuminating about how it wants to think and perform (and ofc each model you can select through has its own quirks!) — but I’m really thinking like a systems architect now.
And basically every project I attempt is going to be downhill from here, LOL, because pretty much it has to be, I jumped into the deep end of the pool.
In between doing Polish stuff yesterday I coded up the beginnings of a social media machine, and today I’m working on something that’ll combine all my Facebook ads and Amazon attribution data into an Ads Arbitrator — that’ll be Project Athena (Imma come up with an entire Olympus suite, muhahaha), and that’ll be SO much easier to do because it’ll just be CSV files, lol.
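Since it’s all CSV files, the core of the Ads Arbitrator is basically one join. A rough sketch with Python’s stdlib `csv` module (the column names `date`, `spend`, and `revenue` are placeholders; the real Facebook and Amazon exports will name things differently):

```python
import csv
import io

def merge_by_date(fb_csv_text, amz_csv_text):
    """Join Facebook ad spend and Amazon attribution revenue on date,
    computing return-on-ad-spend where spend is nonzero."""
    spend = {row["date"]: float(row["spend"])
             for row in csv.DictReader(io.StringIO(fb_csv_text))}
    revenue = {row["date"]: float(row["revenue"])
               for row in csv.DictReader(io.StringIO(amz_csv_text))}
    merged = []
    for date in sorted(set(spend) | set(revenue)):
        s, r = spend.get(date, 0.0), revenue.get(date, 0.0)
        merged.append({"date": date, "spend": s, "revenue": r,
                       "roas": r / s if s else None})
    return merged
```

Walking the union of dates (instead of just one file's) means days with spend but no sales, or sales but no spend, still show up in the output — which is exactly what an arbitrator needs to see.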
My new goal, re: Polish, is to get these books up by EOD Tues (’cause I am gonna have to nudge AG’s Gemini for my final step, alas, alack), but I should be able to stack books up to nudge overnight, so it’ll just be a background thing while I work, and not a big-ass ‘need to monitor it intensely’ deal.
I wanna babble at y’all but lemme go code this first — just trust that I will be very chirpy here coming up! <3
xo!
Cassie
PS: OH! I also found out how much a normal full run of this system costs on one book — $19.70 for my book Guarded by the Nightmare, which is 75,059 words — so my engine’s costing me about $0.000262 a word.
So it’s not nothing? But also, on a word by word basis, pretty effin’ fierce for a perfect-for-publication-translation!
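The arithmetic, for anyone checking my math:

```python
total_cost = 19.70      # one full engine run on one book, in dollars
word_count = 75_059     # Guarded by the Nightmare

per_word = total_cost / word_count
print(f"${per_word:.6f} per word")  # about $0.000262, i.e. roughly 0.026 cents
```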