Building Online Workshops: An AMA

Reading Time: 21 minutes

In the fall of 2022, I wanted to experiment with new teaching formats. I had been teaching at the University of Chicago for five school years, and I had been facilitating live online workshops for O’Reilly for a while too. Each of those formats has its own benefits and challenges, and I wanted to add self-service learning products to the mix. So I came up with some titles I’d be interested in producing, drafted up little presale cards for them, and blasted the links out on Twitter (which had not yet jumped the shark) to let folks vote with their wallets for which ones I’d produce.

Here we are, exactly a year later. So far I have released:

I also have a fourth one available for preorder that will publish in December:

And a fifth one, also still in preorder, that I have committed to finishing in 2024:

(By the way, if you’re the kind of person who likes to go all-in, I made a sale bundle that lets you buy all four of the 2023 releases at once for 15% off.)

I wanted to write about it, but honestly, it’s been a tiring year and I could not be arsed to write my own outline for this post. So instead I popped into my friendly neighborhood Slack and Discord channels and asked other people what questions they would like me to answer. Between those questions and the questions I’ve received via email, I compiled the list below, which I’ll answer inline. Enjoy!

How is independent release different for you from developing on-demand material for a publisher like O’Reilly?

So, some background here: O’Reilly also does self-service courses (they call it on-demand), and I wrote one for them. It’ll release in Spring 2024.

The big differences here are: marketing model, production model, and pricing model. Short version: O’Reilly offers an upfront sum and then a cut of consumer sales. They also have professional producers, pedagogical experts, question designers, and video editors available to help me write the content as well as handle all the post-filming steps. For indie stuff, I create my own upfront sum with preorders, and then I keep almost all the revenue from consumer sales. Every step of production is just…ya boi. No other pros helping me out.

I’ll address these also in greater detail below!

What’s your pricing model?

So, the nice thing about making your own stuff is that you get to keep most of the money. This has two advantages. First, you don’t have to sell as many copies to have a sustainable business. Second, you get to make choices based on what you want your business to be.

The Holy Grail of online businesses, according to the same internet that gave us Boaty McBoatface, is recurring revenue, usually executed through some sort of subscription model. This model gets you off the hook for making ad revenue while offering the thing that feels most comfortable for folks from workaday professions: the possibility of a relatively similar amount of money coming in on a regular basis, much like a paycheck.

I didn’t pick that model, because a subscription makes me responsible for churning out content at a steady pace. If I don’t do that, my subscribers aren’t getting their money’s worth, and then they leave, which ruins the ‘stable paycheck’ dream.

But I have learned that if I focus on cadence over quality, the noise-to-signal ratio of my stuff goes way up. That increases the price of my material for my subscribers. How? By increasing the amount of time they have to spend sorting through my dreck before they find something worthwhile. My conviction is that my core audience has a tight time budget, and they will pay money to find content producers who respect that budget.

I watch people chase recurring revenue; nine out of ten of them end up with some kind of milquetoast weekly newsletter of relatively uninformed takes and link listicles. I’m not into it. Instead, I put a price on each piece of content I produce, so that as long as I have produced the content I’ve been paid for, I’ve fulfilled my obligations to everyone.

How do you divide up your course content?

This is another place where I break from the Boaty McBoatface Internet, on which content creators and their platforms advertise with phrases like “70+ hours of video!! 19 projects to try yourself!”

I have taught in classrooms for six years at this point, and I have spoken at conferences for longer than that. When it comes to talking at people, more is not always better. The more video a course contains and the more self-paced projects it requires, the lower the retention rate among my core audience: engineering leaders with other commitments. So I playtest 10-12 hours of content with my students, my colleagues, and my review groups, and then I cut, cut, cut until I have the 2-3 hours’ worth of material that produces the most post-event benefit for people who came.

So far, I have committed to keeping workshops at or under three hours of video, which is about the longest a self-paced product can be while paying customers still tend to get through all the material. My standard for success is delivering enough value for my price at that amount of content.

That price point for me? $219. I kept it super simple: approximately the amount people pay for a good textbook when not under duress from a teacher’s requirement list. This is also, coincidentally, about the total if you take the median U.S. senior software engineer salary, break it down to an hourly rate, and multiply by the amount of content in the workshop (2.5-3 hours). So it’s about the amount the average American tech company would be paying for my time if I were just a colleague of yours and took you into a meeting room and delivered this material one on one. That felt fair. There are whole debates out there on pricing; I avoided them all so I could make some content. Your mileage may vary; I’m not telling you to do what I did.
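For the curious, here’s that back-of-envelope math in sketch form; the salary figure is an illustrative assumption on my part, not a quoted statistic:

```python
# Back-of-envelope for the $219 price point.
# The salary figure is an illustrative assumption, not a quoted statistic.
assumed_salary = 160_000        # USD/year, ballpark median U.S. senior SWE salary
hours_per_year = 2_080          # 40 hours/week * 52 weeks
hourly_rate = assumed_salary / hours_per_year   # ~$77/hour

for content_hours in (2.5, 3.0):
    print(f"{content_hours} hours of content ≈ ${hourly_rate * content_hours:,.0f}")
# 2.5 hours ≈ $192, 3.0 hours ≈ $231 -- bracketing $219
```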

What is the ratio of prep time to workshop time for you? Is it different for indie vs other delivery methods?

I start by compiling research from reading, consulting, and surveys. Then, I try out approaches for myself with clients and teams. I spend about ten hours directly involved in these tasks for each hour of content I produce, averaged out across all topics. So, for ten hours of material, we’re looking at roughly 100 hours.

Then, once I’ve got about ten hours of material, I start playtesting. I write tweet threads about them and see what response they get. I solidify them into blog posts and see how folks respond. I try out talks and activities with small groups of friends and colleagues who help me refine them. I also playtest the activities with clients and on teams I work with. Occasionally, I’ll show them to University students, too, or I’ll use them during the ‘bespoke content’ section of O’Reilly workshops (for multi-week workshops I spend weeks 1 and 2 on prepared materials, then ask the participants what additional questions and scenarios they have, and design weeks 3 and 4 around that). I spend 20-30 hours playtesting material before I eliminate about half of it in a big cut.

The cut gets me down to 4 to 6 hours of the best stuff. A non-negligible proportion of that will be interactive portions or coding portions, which do not translate well to the online, self-paced format. So I instead hand those over to my live workshops, to go into the “prepared materials” weeks. The parts that translate well asynchronously—usually 2 to 3 hours of material—make it into the self-paced workshop, along with some activities that folks can do on their own. I spend about one work day selecting and scripting per 1.5 hours of material, so, for 3 hours of stuff, we’ll call it 12 hours.

Filming and editing together usually take about 10x as long as the videos themselves at the relatively low production value that I have used so far. I have no fancy studio. I also self-edit. I remove about 80% of the hiccups, but I don’t worry too much about a slightly awkward jump-cut or pause. So, for example, my last filming day took 6.5 hours and produced 1.5 hours’ worth of material, and then I spent another full day plus some overtime doing the editing, rendering, and uploading.

This number, by the way, is way higher for stuff of higher production value—especially if actual, skilled video editors do the video editing. Organizations like O’Reilly can stomach that kind of investment, but I am one person with a day job, a cat to feed, and seasonal depression. So I’ve decided that, if I’ve done my homework and have good stuff to say, my viewers can deal with the weird colors in my room or the cat jumping through the frame or whatever. A higher production value would force me to raise my price point, and I’m not ready for that. Anyway, for three hours of content, this is another 13 hours of filming and 18 hours of editing. We’ll call it an even 30.

So, in the end, 100 + 20-30 + 12 + 30 = 162-172 hours per workshop. If 20 people buy the workshop at full price, I gross 25 bucks an hour. For content creation, that’s not bad at all, and it admittedly doesn’t count the fact that I’m also paid to teach the University students and facilitate the live O’Reilly courses. If we took out the 20-30 for that reason, we’re at 142 not-otherwise-compensated hours, and 20 sales at full price nets me about 30 an hour.
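Here’s that whole time budget consolidated into one sketch, using the figures above:

```python
# Per-workshop time budget, consolidated from the figures above.
research_hours  = 100        # ~10 hours per hour of raw material, ~10 hours of material
playtest_range  = (20, 30)   # hours of playtesting before the big cut
scripting_hours = 12         # ~1 workday per 1.5 hours of final material
film_edit_hours = 30         # filming plus self-editing for ~3 hours of video

low  = research_hours + playtest_range[0] + scripting_hours + film_edit_hours   # 162
high = research_hours + playtest_range[1] + scripting_hours + film_edit_hours   # 172

gross = 20 * 219    # 20 full-price sales
print(f"Total: {low}-{high} hours")
print(f"Gross hourly: ${gross / high:.0f}-${gross / low:.0f}")                  # ~$25-27/hour
print(f"Excluding playtesting: ${gross / (low - playtest_range[0]):.0f}/hour")  # ~$31/hour
```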

That said, about half my sales aren’t full-price, so I come in at a somewhat lower hourly than that. See below.

What supporting infrastructure are you using for each course?

I have all of my courses on the Thinkific platform. I chose Thinkific for two reasons:

  1. They charge me a monthly flat fee, which made it super easy to calculate how many workshop copies I’d need to sell to break even on capital costs in the first year. I used that number to determine whether to continue the experiment past 2023 (gross revenue crossed this threshold in the 2022 EOY presale). A sketch of that break-even math follows this list.
  2. They host the video files for me. I had talked to other people who make online content, and I knew I did not have the energy for screwing around with this.
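A minimal sketch of that break-even arithmetic, with a hypothetical fee standing in since I’m not quoting Thinkific’s actual pricing:

```python
import math

# Hypothetical numbers -- not Thinkific's actual pricing.
monthly_platform_fee = 99      # assumed flat fee, USD/month
price_per_copy = 219           # full workshop price

first_year_costs = 12 * monthly_platform_fee                         # $1,188
copies_to_break_even = math.ceil(first_year_costs / price_per_copy)
print(f"Break even on year-one platform costs at {copies_to_break_even} copies")  # 6
```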

Thinkific gives you a course website to use, so I just use that. You can pay extra money to have it be, like, chelseatroycourses.com as opposed to a subdomain like mine (chelseatroy.thinkific.com), but I have not bothered to do that.

If you want to try Thinkific and you’d like an affiliate code from me, I have sad news to share, which is that I tried to get one and I received this email in reply:

Thinkific says that they can't make me an affiliate code unless I join them as an affiliate partner, which I can only do by getting referred from another affiliate partner via some specific UI that partners apparently have access to.

I am deeply sorry. But if you decide to use Thinkific and you email me about it, I’ll be happy to shoot over my hot tips.

Anyway, before I upload my videos onto Thinkific and put them in my workshops, I edit them with Descript. This recommendation came from Avdi Grimm. I don’t use any of their AI features so far; I just use it to cut my video as well as insert some text, slides, and images. I’ve used iMovie and QuickTime before; I drastically prefer Descript because it allows me to cut the video by removing text from the transcript. While I’m filming, if I make a mistake I know I’m going to want to cut, I mark it by saying the word “watermelon” so I can text search the transcript for mistakes later.
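Descript does that search in-app, but purely as an illustration of the trick, here’s a hypothetical sketch of the same idea run against a raw transcript export (the filename is made up):

```python
# Hypothetical sketch of the marker-word trick on a raw transcript export.
# Descript does this search in-app; this just illustrates the idea.
MARKER = "watermelon"

def find_cut_points(lines):
    """Yield (line_number, text) for each transcript line containing the marker."""
    for number, line in enumerate(lines, start=1):
        if MARKER in line.lower():
            yield number, line.strip()

with open("session-transcript.txt") as transcript:   # made-up filename
    for number, line in find_cut_points(transcript):
        print(f"Cut near line {number}: {line}")
```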

The uncut recording screenshots have become a topic of comedic relief on some Discord servers I frequent: particularly the ones that feature my cat’s, ahem, adversarial participation in filming.

A shot of Chelsea wearing an expression of calm exasperation while her fluffy cat, in the background, gapes at the camera with giant, crossed eyes. The transcript beneath discusses how to explain topics in groups before breaking off into "...watermelon....Nope, We're done."
Zoomed in: a shot of Chelsea wearing an expression of calm exasperation while her fluffy cat, in the background, gapes at the camera with giant, crossed eyes.

Unlike Thinkific, Descript did fork over an affiliate code. Thanks, Descript! Here’s that, for those of you who want to try it.

How do you market the workshops and spread the word about releases?

I have four email lists that I inform about releases:

  1. my blog subscribers
  2. my Patreon subscribers
  3. folks who’ve ‘signed up as learners’ on Thinkific but have not bought a workshop from me before
  4. folks who have bought a workshop from me before

I just started sending progress updates on preordered workshops to lists 3 & 4, but besides that, I only communicate with any of these lists about my workshops to:

  1. Run a title presale. For example, my blog subscribers got invited to preorder titles at a 20% discount to help me decide what to produce when I started out. This will only happen for wallet-voting presales, which I have only ever done one time and haven’t decided if I’ll do again.
  2. Send a discount code upon each workshop’s release. Blog subscribers and learners receive a 10% discount code that works for 7 days following a release; Patreon patrons and folks who have bought before get a 20% discount code that works for 7 days following a release.

Besides being on one of these four lists when a release happens, there are three other ways to get a discount:

  1. Buy the 2023 bulk package.
  2. Certain workshops offer a 15% discount for the immediate follow-on purchase of another workshop.
  3. Corporate discounts for large numbers of seats. These days I do 20% off for 5-10 seats, 30% off for 10-20 seats, and 40% off for 20-30 seats (sketched below). No one has ever bought 20+ seats so far.
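Here’s that tier logic as a sketch; since the stated ranges overlap, the boundary handling (exactly 10 or 20 seats) is arbitrary here:

```python
# Sketch of the corporate seat-discount tiers. Boundary handling for the
# overlapping ranges (exactly 10 or 20 seats) is an assumption.
def corporate_discount(seats: int) -> float:
    """Return the discount fraction for a corporate seat purchase."""
    if seats >= 20:
        return 0.40
    if seats >= 10:
        return 0.30
    if seats >= 5:
        return 0.20
    return 0.0

seats, price = 12, 219
total = seats * price * (1 - corporate_discount(seats))
print(f"{seats} seats: ${total:,.2f}")   # 12 * $219 * 0.70 = $1,839.60
```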

I don’t run random sales because I don’t feel like it’s fair for someone to pay more for having bought before I ran a sale. You get a discount when you’re buying:

  • a presale promise in aid of my market research,
  • a workshop right when I release it, or
  • more than one workshop at a time.

That’s it.

I also tend to announce on Twitter and Mastodon, but almost all sales come from one of the lists. About half of individual course sales take advantage of the discounts.

Are you delivering on evergreen topics or tool-focused topics (Kotlin 1.9)?

I deliberately create technology-agnostic material. That’s true for talks and most live workshops, too. Why:

  1. There are lots of other people making great tool content; tech doesn’t need me to do this.
  2. I don’t want to commit to an update cadence for the same reason I don’t want to commit to a subscription model; I can’t promise indefinite ongoing work, and I don’t want to sell outdated content either.

How do you make evergreen content unique?

I pick a topic that everyone still hates and/or sucks at despite reams and reams of content having been published on the topic. I look for where and why the platitudinous content on the topic breaks in practice. Then I address that directly with new research and frameworks. This is why my content remains popular with folks who have read my stuff despite almost always having a title that somebody else has sworn won’t sell.1

For example, most content about “addressing technical debt” focuses on refactoring, and despite there being tons of content on refactoring, every team still has uncomfortable amounts of tech debt. This happens for a laundry list of reasons, including:

  • The term ‘tech debt’ means different things to different people, so conversations about it often mean nothing and go nowhere
  • Teams don’t do a great job of advocating for time to support system maintainability, nor do they demonstrate progress in ways that help the business differentiate their work from valueless nerd onanism
  • When given the time to make their system maintainable, individual engineers spring for refactors that don’t actually reduce maintenance load, so the business is like “Just as I suspected: nerd onanism”
  • Context loss events make code hard to change over time, and teams do little, if anything, to guard against them or recover from them
  • Individual engineers make large changes out of step with the rest of the team, so that the rest of the team ends up resisting the change for perfectly good reasons

This is just a sample of the list, right? And most of the material out there isn’t talking about these issues beyond just complaining about them. So I looked for, and tested, solutions, and that’s the content of the workshop.

Similarly, with giving and receiving feedback, I found a list of reasons that it stays hard in spite of lots of discussion:

  • Most advice pretends that feedback givers can control whether their feedback is perceived as ‘kind,’ and they can’t
  • Most advice lets feedback givers just decide, without inquiry, whether, when, and about what to give feedback, which sets people up to fail to come off as kind
  • Very few resources talk about soliciting or receiving feedback as skills in and of themselves, placing all the onus on the givers to control a thing they can’t control, while leaving out in the cold the prospective receivers who want to know why no one gives them feedback anymore (especially if they’ve accumulated organizational power during their careers)

I looked for, and tested, solutions, and that’s the content of that workshop.

How about leading an inclusive technical team? Lemme get one thing out of the way with this one: I’ve had multiple people tell me “you can’t use the word ‘inclusive’ in a course title or people won’t buy it.” Look folks: I am visibly a dyke. People who get scared off by the word ‘inclusive’ aren’t in my audience anyway. My audience cares about inclusion in theory and has become frustrated with all the ways inclusion efforts suck in practice. Things like:

  • It’s nigh impossible to give someone at work inclusion feedback without triggering a fragility response because they decide they’re being called racist or sexist, and once that happens, the conversation goes nowhere productive in 99% of cases
  • Meetings that operate like “everybody just talk whenever you have something to say” account for the majority of meetings at tech companies, and they’re an inclusion nightmare
  • Technical leadership isn’t sure what exactly they can do to foster an inclusive environment on their team because most advice treats this as either an HR problem or an individual moral failing, both of which (scalding take here) are wildly inaccurate
  • Any effort that successfully improves inclusion outcomes on the team results in more disagreement because more people with underrepresented perspectives actually get to say something. To leaders who unwittingly value urgency, getting their way, and being right over achieving better product outcomes, that looks like a failure. So they put the kibosh on the inclusion efforts.

So what did I do? That’s right: I figured out solutions, I tested them, and I stuck them in a workshop.

For what it’s worth, Analyzing Risk in an Application works the same way. This is an online-ification of a workshop I have given live for years. It directly addresses how most risk analysis approaches are either too complicated or not designed for iterative software processes. So instead of using them, engineers just wing it, which has its own problems. The workshop focuses on a framework with enough accuracy to be useful and enough simplicity to be used, and I’ve watched it help several teams out of design ruts with my own two eyeballs.

Are you worried about AI producing content that competes with yours?

I am not. Generative machine learning systems work by ingesting a whole bunch of material and then spitting out patterns that sound plausible based on that material. They aren’t innovation machines: they’re plagiarization-at-scale machines.

If my content were rearrangements of what’s already out there, I’d worry about it. To be clear, I still do not think generative AI can produce materials of the quality of an actual human collator, provided the collator is willing to fact-check what they’re collating. I just think it’s possible customers wouldn’t notice or care enough to eliminate the AI version as sales competition for the human collator.

I endeavor pretty hard with my material, though, to say things that haven’t been said enough times to make it very far up an LLM’s pattern probability list. An LLM is not going to stochastically parrot its way to “Technical Debt is a meaningless term” or “Feedback givers have no control over whether their feedback comes off as kind, actually” or “Fragility almost always makes it impossible to productively deliver inclusion feedback if you insist on highlighting the demographic patterns present in the recipient’s behaviors.” I’m not doing what ChatGPT and friends are designed to do.

What’s with the weird art?

I have to assume that this question pertains to the banners on my course pages:

Six course page banners appear in a grid. The first one shows a plastic yard flamingo, painted black with vampire teeth, stuck in a bed of hydrangeas with an out-of-focus cafe seating area in the background. The others show brightly colored acrylic paintings. Most have abstract shapes painted in red, orange, yellow, and green. One of them shows a glass coffee cup pouring out a latte, painted with gold coffee and white latte art, on top of a blue and white splatter background.

I wanted to avoid running afoul of any commercial use licenses for art or illustration, so I chose to make the course banners out of my own paintings and photography. The abstract ones all depict inspiration from individual songs: that red and yellow one characterizes Nina Simone’s “Feeling Good.” The green and yellow one next to it characterizes “When the Saints Go Marchin’ In,” a song I heard hundreds of times in my first few years of life in Thibodaux, Louisiana. I don’t remember this, but according to my mother, I sang that song live at age 3 or so on Chelsea Pier in NYC in front of a crowd of onlookers and some poor busker who probably had no idea what they were getting into when they handed me the microphone.

The orange one with the green rectangles represents Valerie June’s “Got Soul,” and the green one with the orange soundwaves and the dark blue circle represents “40oz. to Freedom” by Sublime. Fun fact: I painted both of those upon the request of my then-girlfriend, who hung them in her house while we were together, and then mailed them back to me two years after we broke up (I assume she moved house). Today, the Valerie June one hangs in my living room along with the Nina Simone one and a giclée print of a giant bubble painting that we bought together on a road trip through Grand Rapids.

The coffee cup painting has no special meaning. I glued magnets to the back and left it on my fridge for years until finally it fell off enough times that the corners got all busted up. The bundle there in the top left sports a photograph I took in the front patio of my local coffee shop around Halloween.

Listen, O’Reilly chooses random animals for its book covers, and folks love it. Consider these paintings and photographs my version of that.

Footnotes

1. I’ve been told “tech debt” doesn’t sell. I’ve been told “risk analysis” doesn’t sell. I’ve been told “inclusion” doesn’t sell. And that’s probably true to general audiences, so I’ll always be limited to my own little list until I come up with sexier titles. My deliberate practice of giving stuff clear and boring names works great for naming repositories in a discoverable way, but fails at getting strangers interested in my workshops. Let me know if you have any creative ideas.

If you liked this piece, you might also like:

This one, which talks more holistically about the different teaching engagements I do

The time I talked about code coverage tools (I’m not a tools talker for reasons you now understand, but this one discusses tool evaluation more than specific tools)

Outsmarting Zinger Fever on the Internet, all the more relevant on Melon Husk’s TAFKA Twitter
