Fraymotifs: Crystallizations of Memory
It's never enough
I can tell you the exact moment I realized it was possible to generate Fraymotifs with the Paradox Engine.
It was Sunday, July 21st, 2025 at 17:48 UTC, when I rushed to my Discord chat and hurriedly typed "holy fuck i just realized how to make fraymotifs with this thing". I really thought I was done after adding alchemy, but an idea struck me.
My friend Reach already made a Homestuck Discord bot that does titling/"classpecting" and alchemy, as well as other fun things such as character generation and rating how much of a "Vriska" your character is. As far as I'm aware, though, it operates on one very long prompt and/or a fine-tuned model, combined with the user's request. It does use structured outputs and Python objects for alchemy, but I'm not sure how they're used on the backend. She's a much better writer than I am, having written multiple novels, and a good engineer as well, having built Google Colab notebooks for the public to use AI art models before 2022, when they were released by corporations and used to generate endless slop. "Before it was cool" in this case was "when it was cool"; now, just about anyone can do it.
My background's a bit different; I finished my degree in computer science and math fairly recently, and started work as a backend engineer for a small startup. I was in charge of everything, so I had to learn everything. System design for me wasn't locked behind seniority requirements and approval paperwork; if something had to be built, it fell on me to build it, even just a few months in. As a result of my academic and industrial training, of course my ideal system of alchemy involved an SQLite database and a logically-backed set of operations.
One thing her bot didn't have, but would excel at if it did, was fraymotifs. I took it upon myself to build them. What follows is largely a creative writing project, designed by an engineer.
The deep lore
Fraymotifs aren't really explained in Homestuck. In case you missed it in the past couple posts, Homestuck is a webcomic by Andrew Hussie featuring four children who play a magical video game by entering its parallel dimension. Most of the first half of the comic is filled with computer science references and video game mechanic jokes, defining its early setting. One example is the Fraymotif, some kind of combo attack that never gets elaborated on. They get mentioned occasionally throughout the comic, and are shown on screen in a big fight scene at the very end, but nothing about them is explained beyond their names, which have something to do with the player's aspect1, musical terms, and occasionally a reference to the player themselves.
Part of the reason I like Homestuck is admittedly because of its ripeness for self-insertion,2 because of its power system that doubles as typology, and because of the relative "flatness" of the early characters, with additional characterization driven by fanworks and later story arcs. You, too, can be a 13-year-old kid playing a magic video game! 10 years after 13, I would much rather make up the kids than be the kids, but it's still an open world with a setting that invites you to participate somehow. Fraymotifs have to mean something to the maker, so I took a page right out of the other Homestuck project I also participate in.
A fraymotif to the Paradox Engine is a crystallization of a memory. In the multiverse containing all possible universes, Paradox Space, there is occasionally a memory held by a player so powerful that it turns into a physical object, a "crystal" of sorts, that floats through the void between universes. Occasionally, these land in sessions of SBURB, the magical video game the kids play, and get picked up by consorts, the inhabitants of the lands that the kids adventure on, to get sold to adventurers. If the players resonate with the memory of a fraymotif, they can tap into the power contained within and unleash massive damage upon their enemies.
All the user needs to provide is the player's Title, a memory, and any additional information about the character, and the Paradox Engine will crystallize the memory into a fraymotif.
Formatting whims and woes
I wanted these to be in a specific format. I imagined a buildup, an announcement of the attack, and the effects unleashed, so I just prompted for it. This was fairly straightforward; I told Claude (or Llama, or any LLM, really) what a fraymotif is in the terms above, and made a schema for structured output: a thinking space to consider possible effects, a "visual component" describing the appearance of the fraymotif as the "buildup", the name of the fraymotif itself, and the "mechanical component", or effects of said fraymotif. After providing a couple examples I wrote myself for names and effects, and a couple constraints such as "don't imply a gender for the user unless specified", it was good to go.
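That schema is easy to sketch with Pydantic; the field names here are my guesses at the shape described above, not the bot's actual code:

```python
from pydantic import BaseModel, Field

class Fraymotif(BaseModel):
    """Hypothetical structured-output schema for a crystallized fraymotif."""

    thinking: str = Field(
        description="Scratch space for the model to consider possible effects"
    )
    visual_component: str = Field(
        description="The buildup: how the fraymotif looks as it activates"
    )
    name: str = Field(
        description="The fraymotif's name, riffing on aspect and musical terms"
    )
    mechanical_component: str = Field(
        description="The effects unleashed on the player's enemies"
    )
```

Most structured-output APIs will accept a JSON schema generated from a model like this, and the field order doubles as a cheap chain of thought: the model fills in `thinking` before committing to a name.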
Well, almost. There were a few hiccups in formatting that I had to fix, such as splitting the message into chunks of 2000 characters each. I forgot to do that even after I did it for classpecting, whoops! I was also disappointed that the best I could do for structured input of player titles was to require the user to enter a comma-separated list in near-exact formatting. When you follow the instructions, it's pretty fun.
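The chunking itself is only a few lines; here's a minimal sketch of splitting on Discord's 2000-character message limit, preferring to break at newlines so formatting survives (the helper name is mine, not the bot's):

```python
def chunk_message(text: str, limit: int = 2000) -> list[str]:
    """Split text into pieces under Discord's per-message character limit,
    preferring to break on a newline so formatting survives the split."""
    chunks = []
    while len(text) > limit:
        # Break at the last newline before the limit if there is one.
        cut = text.rfind("\n", 0, limit)
        if cut <= 0:
            cut = limit
        chunks.append(text[:cut])
        # The message boundary stands in for the newline we cut at.
        text = text[cut:].lstrip("\n")
    if text:
        chunks.append(text)
    return chunks
```

Each element can then be sent as its own Discord message in order.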
Further experimentation
Classpecting involved learning web scraping, and alchemy needed a local database with implementations of codes, but all things considered, fraymotifs were pretty straightforward to build. How could I make them less straightforward?
"Prompt engineering", the art of commanding LLMs to do things with words, is notoriously finnicky and irreplicable, which is why it's perfect for LinkedIn gurus; they can get lucky once and claim to hold some secret sauce you don't have. Thanks to a couple nameable people on Twitter, it's since evolved into "context engineering", which includes other computational methods of choosing which words to put into the LLM, such as "a search engine". Incredibly genius and novel stuff, wow.
The best actual "prompt engineers" I know are creative writers who see LLMs for the word-calculators they are, and are able to use their pre-existing creative writing skills to get the LLM to output more good writing. To paraphrase one of them, this takes about as much effort as you'd need to just write a book the normal way. In my opinion, that means if you want to write a book, you should therefore just write the book, but there are so many forms of media outside of "books", and this is one where creative writing skill will directly affect how engaging the final piece of media is.
DSPy promises to do away with "prompt engineering" entirely. It wants to turn prompting into a more traditional, replicable engineering pipeline; rather than trying to explain to the LLM what you mean in words, you instead define your inputs and outputs, you supply examples, you pick an optimizer, and you run it, eventually getting a prompt that will hopefully perform the best when creating future examples. No writing needed whatsoever. This is meant for tasks where there's one right answer, such as answering questions or classifying sentences. Creative writing can't be judged by a "right answer", but there's generally a sphere of answers people would expect, or another sphere that people wouldn't expect but would more than gladly accept. I already had a couple examples of fraymotifs on hand from the existing prompt, and I had input and output schemas defined for the structured output because I have a Pydantic addiction; it was possible that a simple "semantic distance" optimizer would be able to judge that "closeness" to an "expected" or "surprising but enjoyable" answer. I just had to find out.
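What would that "closeness" metric even look like? As a toy, stdlib-only stand-in for a real embedding-based semantic distance (which is what an actual optimizer metric would use), string similarity sketches the idea:

```python
from difflib import SequenceMatcher

def semantic_closeness(gold: str, predicted: str) -> float:
    """Toy stand-in for an embedding-based similarity metric.

    Returns a score in [0, 1], higher meaning closer to the expected
    answer. A real metric would embed both texts with a sentence-embedding
    model and compare the vectors instead of raw characters.
    """
    return SequenceMatcher(None, gold.lower(), predicted.lower()).ratio()
```

An optimizer then just needs this signature, metric(gold, predicted) -> score, to rank candidate prompts against the example fraymotifs.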
After some poking around with JSON files, DSPy output a "DSPy program", its term for an optimized set of prompts that will perform the best possible on your given task. Here, the task was "making a new fraymotif". My sample size was 1 or 2, so it didn't perform too well on the metrics, but I looked over the prompt and surprisingly, it got the intent! I wouldn't expect it to derive what a "fraymotif" was from first principles, but here it was in another JSON file. Would it write better or worse than my original prompt if trained on more examples? I was considering finding out, but I had enough for that night.
The implications were staggering; truly personalized, interactive media could now be produced from prior examples alone, given an existing LLM. The current strategy is to scrape the entire internet to form a "base layer" of knowledge, then either fine-tune or prompt engineer the model for some downstream task. The DSPy authors have published papers showing non-trivial improvement when using the right optimizers and a large enough sample size; maybe smaller, locally-run models could be used, and individual creative writers could write a set of their own examples to flesh out a world, and through those examples, create an interactive piece of media for a reader to immerse themselves in the author's creation.
1. Explained in part 1 of this series.
2. AY YO PHRASING