Designing a Course 8: Use of Surveys

Reading Time: 11 minutes

Hi. I’m Chelsea Troy. I’m a computer science educator at the University of Chicago. I’m writing about the techniques I use to teach my distributed course, Mobile Software Development. You can see all the posts in the “Designing a Course” series right here!

In this post, I’ll talk about my favorite iterative tool—surveys.

My students receive their first survey before the course begins.

I ask for their names, pronunciation of their names, and their pronouns. I ask them which programming languages they’ve written before.

I also ask students about themselves. What do they do in their spare time? What current events are on their mind? What are they hoping or planning to do after they graduate?

Having students’ name pronunciations and pronouns allows me to address them in class in a way that I’m confident they’ll feel seen.

I ask about programming languages because, in my first lecture, I teach Swift by comparing it to a language that people in the class already know. This technique helps students retain the information (Linda and Bruce Campbell talk more about that in this paper, and I talk more specifically about the language-learning activity in this post).
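To give a flavor of that comparison (the snippets below are illustrative examples I'm making up for this post, not necessarily the ones from lecture), here is Swift annotated with rough Python equivalents:

```swift
// A taste of teach-by-comparison: each Swift construct is annotated
// with a rough Python equivalent for students who already know Python.

// Python: def greet(name): return f"Hello, {name}"
func greet(name: String) -> String {
    return "Hello, \(name)"  // string interpolation, like an f-string
}

// Python: doubled = [n * 2 for n in [1, 2, 3]]
let doubled = [1, 2, 3].map { $0 * 2 }  // map in place of a comprehension

// Python uses None anywhere; Swift makes "might be missing" explicit.
// Python: middle_name = None
let middleName: String? = nil
if let middle = middleName {
    print("Middle name: \(middle)")  // runs only when a value is present
} else {
    print("No middle name on file")  // the nil case is handled explicitly
}
```

The point is to anchor each new piece of syntax to something the student already knows, so Swift feels like a dialect rather than a brand-new world.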

Knowing a little about students’ lives helps me schedule my office hours. Full-time students can do weekday office hours; folks who are working usually can’t. Parents can do evening times after the kids are asleep, but 8 AM is a bad time because they’re getting kids ready for school.

Finally, I ask students about their hobbies, passions/concerns, and future plans to prime them for bringing these into the classroom. I ask them to implement certain features in an app, but they get to choose the subject matter. Students feel more motivated to get their features right when the features pertain to something they care about.

Reading responses to this survey also helps me connect topics on students’ minds to class activities. For example, I have them add features to an app that I start for them, where they cannot individually choose the subject matter. I chose mental health for this app because, it turns out, most people who take my class are interested in it.

Students also receive two surveys (at least) per class.

 1. At the beginning of class, I send a “homework time spend” survey. 

I want to keep students under a certain number of hours of homework per week. I talk about that right here, so I won’t bore you with it again. The surveys help me identify problems with the homework where students spend too much time for too little learning stimulus.

Example: I have them make a video of themselves explaining a feature of an open source app. I used to send students a community list of two hundred open source apps to choose from. The problem: most of the apps either didn’t fulfill the requirements of the assignment, or they didn’t actually work. Students spent hours trying to get any app from the list to run. This wasn’t the point of the assignment at all. After that, I curated a smaller list of appropriate, functioning apps for the assignment.

 2. At the end of each session, I send a “session techniques survey.”

These surveys are my “secret sauce” as an instructor, which shows you how secret it is because here I am writing about it on the internet.

I experiment with different teaching techniques in the first few sessions of the quarter, and then I ask the students specific questions about those techniques. Toward the beginning of the quarter, it tends to be “click an option” questions like these three:

(Slideshow: three example “click an option” questions.)

At this point, I benefit a lot from having a rough idea of what breakdown of activities is going to help them. Every class is a little bit different, so I adjust according to what works for most people each session and then fine-tune to make sure the format also works for people who are not in the majority (more on that later).

Toward the end of the quarter, I ask open-ended questions on the surveys:

(Screenshot: an example open-ended question from a session techniques survey.)

Now that we have an established relationship and students are more invested in the class, they answer these thoughtfully. In fact, they come up with a lot of the ideas that lead to innovative and effective class activities!

Even though I don’t put required, open-ended questions on these surveys until later in the class, every Session Techniques Survey includes an optional “Additional Comments” box. Here’s a selection of responses from that box on the very first Session Techniques Survey:

(Screenshot: a selection of responses from the “Additional Comments” box on the first Session Techniques Survey.)

There are a few additional advantages to the Session Techniques Survey.

  1. The questions can be extremely specific. I am focusing students, not on my teaching as a whole, but on this specific activity. This gives me more granular information about what’s working and what’s not than the end-of-quarter surveys distributed by the University, which cannot be so specific since the same survey has to work for every class and instructor.
  2. The answers are timelier. Students just did the activity, so they remember exactly how it felt. Also, students know that their responses will benefit them during future sessions of this class, not just theoretical students of future quarters. Occasionally I send students an email summarizing what I have learned from their responses so far and how those have affected the class.
  3. The end-of-quarter surveys contain almost no surprises. This is critical at the University of Chicago because our quarter schedule is ten weeks on, one week off, repeat. One week would not be enough time to make major adjustments that I only discover at the end of the quarter.

On occasion, a session will include a third survey before an activity.

Earlier in this post, I said:

Every class is a little bit different, so I adjust according to what works for most people each session and then fine-tune to make sure the format also works for people who are not in the majority (more on that later).

This is where the fine-tuning happens. The classic example: would people prefer to experience new techniques for the first time by exploring on their own, or exploring with a pair? There are people who love to do it one way and really struggle the other way—in both directions. I used to solve this exclusively by switching around: one activity is alone, the next activity is in a pair, the next in a small group, et cetera. I still do this.

But sometimes, I give students a choice.

Example: I’ll send out a survey before the 12-minute break (it’s a 3-hour class). It asks:

  1. What is your name on Zoom?
  2. The next activity is X. Would you like to do it alone, or in a pair?

Then, while the students are on their break, I make breakout rooms in Zoom. All students who want a pair get paired with someone in a breakout room. When the students return, I tell them: if you wanted a pair, go to your room when the breakout room invitation arrives. If you wanted to work alone, don’t go to a breakout room; stay in the main room and turn off your audio. Then I ambulate among the rooms to check on people, and I check our class Slack channel for questions as they do the activity.

There is one final survey.

Students opt into this survey during the last class session, and I send it one year after the class has ended. It asks:

  • What have you ended up doing so far? What do you want to be doing?
  • Have you noticed yourself using any skills from our class?
  • Is there something we spent a lot of time on in class that you’ve never used?
  • What do you wish you had learned before you ended up where you are now?

With this survey, I gauge long-term effects.

The first question echoes a similar one from the pre-course survey. I’m curious to see if the answer changes—like if one student wants to go work in banking, but then they end up deciding they’d prefer a job in public service.

I use responses to the second and third questions to confirm or challenge my hypotheses about which parts of my class help people after it’s over.

Things I have been right about: people use what they learn about version control. They often also use what they learn about automated testing.

By the way, I found out about the demand for automated testing from responses to the fourth question. I use automated testing all the time in my job, but I also started my career in a TDD cult, so I have a skewed perspective on how weird that is. I didn’t emphasize it in my first iOS Development class.

A year later, while I was developing Mobile Software Development, two-thirds of respondents to the one-year-later survey said “I wish any class had gone over automated testing before I graduated.” MSD covers automated testing on iOS. Then it covers it again on Android, and it introduces TDD for fixing a bug. Finally, students work in groups on an activity that asks them to use multiple tests to drive out a feature with some non-trivial logic. Multiple people have gotten into their full-time jobs and come back with “I am SO glad we did that.”
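To sketch the shape of that bug-fixing exercise in Swift (this toy example is invented for this post, not taken from the class materials): you pin the bug with a failing test first, then change the production code until the test passes.

```swift
import XCTest

// Hypothetical example: a tip calculator that truncated instead of
// rounding. The TDD-style fix starts by pinning the bug with a test.

struct TipCalculator {
    // The buggy version read:
    //     Double(Int(amount * percent / 100))
    // which truncated $3.67 down to $3.00.
    func tip(on amount: Double, percent: Double) -> Double {
        (amount * percent / 100 * 100).rounded() / 100  // round to cents
    }
}

final class TipCalculatorTests: XCTestCase {
    func testTipRoundsToNearestCent() {
        let calculator = TipCalculator()
        // This test failed against the buggy version (it returned 3.0),
        // reproducing the report before any production code changed.
        XCTAssertEqual(calculator.tip(on: 18.35, percent: 20), 3.67,
                       accuracy: 0.001)
    }
}
```

The sequencing is the lesson: the test reproduces the bug before any fix exists, so the moment it passes, you know the fix actually landed.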

Effects I didn’t expect in the one-year-later survey: people use the risk analysis exercise later. I assumed that was something they’d do once and forget about.

Effects I expected (eh, more like hoped for) that aren’t there: the ethics material. So far, this doesn’t stick the way I wish it did. I get it: capitalism is a hell of a force.

I think solving this will require finding practical ways to address the imbalance of power between individual contributors and large corporations. Some of that is systemic—so we address it with collective action over time. Some of that is individual—for example, I’d need to give students tools for challenging the status quo that mitigate the perceived (and, to be frank, the actual) risk of losing their jobs for doing so.

Conclusion

This post is long. The gist:

  • I send students a survey before the start of class. I use it to:
    • Get their names (with pronunciation, so I don’t white-lady-butcher anybody’s name) and their pronouns.
    • Find out what programming languages they know, so I can prepare for an initial segment that uses this information.
    • Find out about their lives, which helps me structure the course.
  • I send students a survey at the beginning of every class. I use it to:
    • Make sure the homework is taking the appropriate amount of time.
  • I send students a survey at the end of every class. I use it to:
    • Find out what breakdown of activities will help the class learn the best.
  • I sometimes send students a survey in the middle of class. I use it to:
    • Tailor activities to different learning styles.
  • Finally, I send students an opt-in survey a year later. I use it to:
    • Gauge whether the students’ plans for themselves change.
    • Find out which lessons from my class were most useful.
    • Find out which lessons from my class were least useful.
    • Find out which lessons are missing from our curriculum.

These surveys drive most of the changes to my syllabus from quarter to quarter.

You can see all my posts about teaching by visiting the teaching category right here. You’ll find pieces about designing a syllabus, choosing topics to cover in a session, and selecting session activities.

If you liked this piece, you might also like:

The learn a new programming language post I mentioned above

The Leveling Up Series (a perpetual favorite for gearheads like yourself)

The Books Category—blog posts where I reflected on a book
