Thoughts on Writing Good SRS Prompts, Part 1: What is a Good Prompt?

Some time ago, I worked through Andy Matuschak’s guide, How to write good prompts: using spaced repetition to create understanding, and took some personal notes. Revisiting the notes, I decided to write a couple of blog posts to help me consolidate my thoughts.

For the uninitiated, the guide is about using spaced repetition systems (SRS). One of the most popular SRS apps is Anki, which I’ve been a moderate user of for a few years myself. These apps are like supercharged flashcards: you write your cards, i.e. question prompts with answers, and review them regularly. That may not sound impressive, but behind the scenes, these apps schedule your card reviews in a way that optimizes learning gain, according to cognitive science principles.
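To give a feel for what “scheduling reviews” means, here is a toy sketch in Python. It is not Anki’s actual algorithm (Anki’s scheduler descends from SuperMemo’s SM-2 and is considerably more involved); the numbers and rules below are simplified assumptions chosen only to illustrate the core idea: intervals grow when you remember and reset when you forget.

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval: float = 1.0  # days until the next review
    ease: float = 2.5      # growth factor applied on each success

def review(card: Card, remembered: bool) -> Card:
    """Toy spaced-repetition update: grow the interval on success,
    reset it (and lower the ease slightly) on failure."""
    if remembered:
        card.interval *= card.ease
    else:
        card.interval = 1.0
        card.ease = max(1.3, card.ease - 0.2)  # floor keeps intervals growing
    return card

card = Card()
for outcome in (True, True, False, True):
    card = review(card, outcome)
```

Even this crude version shows the key property: cards you know well drift toward long intervals and stop eating your time, while cards you struggle with come back soon.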

SRS is theoretically sound, and it has an enthusiastic user base. But how much you get out of it depends a lot on how well you use it. A big part of that is whether you can write effective prompts. Many smart people, Andy included (e.g. Michael Nielsen, Piotr Wozniak), have written far more about SRS, so for those interested, check out their work on topics such as the mnemonic medium and tools for thought. There you should find writings and discussions more mature than my own.

What is a “Good” Prompt?

One thing I kept thinking about as I worked through Andy’s guide is that there are two ways a prompt can be “good.” One is the “what”: the prompt pertains to knowledge that is worth learning to you. The other is the “how”: the prompt works well in an SRS, such that it helps you learn that knowledge efficiently. Andy also mentions this distinction in the guide.

In practice, the two notions of goodness are intertwined. But to see how they differ, imagine a prompt about knowledge that’s really valuable to you, but written in a way that makes it hard to review. Or conversely, imagine a prompt that helps you easily remember some useless facts.

From an instructional design perspective, we could say that this is about goal and instruction. Goal is the “what,” and instruction is the “how.” In this language, we could say that writing good prompts is simultaneously about:

  • Defining good learning goals (write prompts worth reviewing)
  • Designing good instruction to reach goals (write prompts that are efficient to review)

I tried to see whether I could put each of Andy’s prompt-writing principles into one of the two categories. Take the five properties of effective prompts — focused, precise, consistent, tractable, effortful. These aren’t much concerned with what knowledge you’re trying to gain, but rather with how prompts should be written to make your practice sessions more effective. If I have an unfocused prompt with too many details to recall, I can improve it by splitting it into multiple prompts, but I’ll still be learning the same things. I’ve only changed the instruction.

For something that targets better goals, take the lenses for conceptual knowledge — attributes and tendencies, similarities and differences, and so on. These lenses make you think: What do I really want when I say I want to know or understand this? In what way, and to what degree? How much and how far you adopt these lenses corresponds to the depth of understanding you seek. It’s also about deciding what is important to you. Looking for keywords in procedures is very much a practice of identifying what you find valuable to learn.

When I evaluate my own prompts, I put on the instructional designer hat and ask myself whether my prompts target the right goals and are instructionally effective. This gives me an additional layer of mental structure over the more concrete techniques like those in Andy’s guide.

Is being aware of the distinction between goal and instruction important for writing better SRS prompts? Since writing SRS prompts means giving ourselves recurring learning tasks, it is nothing short of doing instructional design. So making this distinction should be helpful, if not important… or so I thought. On the flip side, unless one also knows what makes for good goals and good instruction, being able to differentiate them may not amount to much. So perhaps for most people, a grab bag of tips and tricks is more helpful than theories you can’t use without a web of associated knowledge.

My Guiding Principle for Writing Good Prompts

I’ve been experimenting with another notion of what a “good” prompt is, and it has to do with transferability. In other words, I want to learn things in a way that lets me apply them in life.

A bit of background: I’ve used Anki for about three years. During this time, I learned a few things about the learning sciences and instructional design, and I experimented with my SRS prompt-writing process as an application of what I’d learned. A key idea that informs my prompt-writing is cognitive task analysis (CTA): for a given task, we try to break down the mental process of completing it into smaller component parts. In particular, I’m most interested in extracting component procedural skills in terms of ACT-R theory’s knowledge representation, i.e. “production rules.”
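To make “production rule” a little more concrete: in ACT-R, a production rule is an IF–THEN pair that matches a condition on the current goal and context and fires an action. The sketch below is my own loose illustration in Python, not ACT-R notation or its actual architecture; the borrowing-in-subtraction example and all the names are assumptions chosen for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProductionRule:
    """Toy IF-condition THEN-action pair, loosely modeled on
    ACT-R's production rules (illustration only)."""
    name: str
    condition: Callable[[dict], bool]  # does this rule match the context?
    action: Callable[[dict], dict]     # how does firing transform the context?

# One small component skill within multi-digit subtraction:
# borrow from the next column when the top digit is smaller.
borrow = ProductionRule(
    name="borrow-when-top-digit-smaller",
    condition=lambda ctx: ctx["top"] < ctx["bottom"],
    action=lambda ctx: {**ctx, "top": ctx["top"] + 10, "borrowed": True},
)

ctx = {"top": 3, "bottom": 7}
if borrow.condition(ctx):
    ctx = borrow.action(ctx)
```

The point of doing CTA in this spirit is that each such rule is small enough to become the target of one or a few prompts, rather than trying to drill “subtraction” as a monolith.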

Jargon aside, what I’m looking for in a good prompt, or more precisely, the practice task of answering a prompt, is that it should match some authentic context. It should look like a little piece of something I might actually do.

This is, in fact, a pretty low bar to clear. It’s perhaps a bit of a cheat, but one scenario I often reach for is explaining what something is to someone during a conversation. For example, I can imagine talking with someone when the topic of learning styles comes up, and I need to recall the term for the discredited idea about learning styles… oh yes, the meshing hypothesis (and there’s card 1). Then I might be asked what that means (card 2). That’s enough of a realistic scenario for a couple of prompts around the topic.

By evaluating prompts this way, I naturally think more about the application context of each prompt during review sessions. I think this is helpful, both for catching ineffective prompts to improve, and for reviewing in a way that gives me a better chance of applying the knowledge when an opportunity arises.

With that as a guiding principle, I consider methods like those in Andy’s guide supplementary aids that strengthen the prompts. But I worry less about whether prompts are, say, focused and tractable than about whether I can contextualize them.

Summary & The Challenge of Learning to Use SRS Effectively

So far, I’ve shared some thoughts on what a “good” prompt entails for me. From an instructional design perspective, good goals and good instruction are two general categories we could use to help frame the more specific tips and suggestions for writing prompts. I also shared how I use transferability as my primary guiding principle for writing better prompts.

Andy’s guide is quite optimistic about whether people can learn to use spaced repetition systems effectively. I share the sentiment, but I also think it is intrinsically very challenging. The challenge isn’t specific to SRS per se; it is common to any self-directed learning. Namely, it demands a great deal of metacognitive ability, and developing metacognitive ability is hard.

Effective use of SRS requires strong skills in self-reflection and self-direction. We need to constantly ask ourselves what matters for us to learn, and scaffold our own learning by writing better prompts. We need to introspect on the way we think and act in order to break down a skill. We need to self-evaluate the results of our review sessions. And we need the discipline to maintain a regular review routine.

If we consistently use an SRS and follow tips like those in Andy’s guide to improve our prompts, we’ll surely get better over time. But it won’t be simple, and we’ll probably all face various challenges. For example, here are two problems I don’t have good answers to:

  • How do we encourage and scaffold the revision of old prompts?
  • How do we really know what our prompts should target?

The first problem comes from the fact that I often feel I should improve or update some prompts, but rarely actually do. The second is more theoretical, but quite serious, since we’re using SRS for more than simple memorization. There’s much more to say on that front. If I get around to writing Part 2, I’ll share some thoughts on targeting goals and on the idea of keeping things salient.
