Feedback at HBS

A theme at HBS has been the importance of self-awareness. The entire FIELD course works on developing the sort of personal skills that aren’t exercised by the case method. As part of that, two entire class days at the beginning of the term were devoted to “communication, voice & self-awareness.” A big piece of that has been the importance of giving and receiving feedback.

If you aren’t familiar with HBS’s curriculum, everything else is done via the case method. A “case” is a long story with a problem that the protagonist needs to solve. We’ve done 7–12 of these per week across our subject areas: Accounting (FRC), Finance (FIN), Technology and Operations Management (TOM), Marketing (MKT) and Leadership and Organizational Behavior (LEAD).

The 94 of us arrive (on time) and sit (in our assigned seats), and the professor helps moderate a discussion for 80 minutes. One person is usually “cold-called”: asked to open the discussion by summarizing the situation and giving their best recommendation for what the protagonist should do. Then everyone else jumps in to advance the conversation. This participation makes up about half of our grade in each class.

Participating in section is much harder than anything I’ve done at work. Harder than daily work, and also harder than key presentations. Harder than speaking at a conference. Harder than teaching. I’ve hit my metaphorical “wall” more times than I would care to admit.

Class is an exercise in balancing:
- What do I bring to this conversation that’s unique and interesting?
- How can I make that point in 30 seconds or less, in a way that will help my classmates understand?
- When is the best time to raise my hand to try to fit that into the flow of our conversation?
- What cases do I want to contribute to?

We’re now mid-way through the first semester, so feedback on participation has started to roll in.

Our two days of conversation on self-awareness and how to give effective feedback covered a lot of ground: give it in person, start from shared ground, give it as close to when something happened as possible, give specific examples, explain the impact of the behavior, provide actionable ways to improve, etc.

In contrast, the midterm feedback comes as a form letter that groups us into one of three buckets: strong, acceptable, needs improvement. These buckets track primarily with frequency of contribution. That seems odd, given that the message since the beginning of the semester has been “quality matters more than quantity, but quantity also matters.”

It’s easy to be critical, but judged against both my experience receiving it and what HBS itself taught us about giving feedback, I think the midterm feedback is more harmful than it is helpful.

My first feedback was an “acceptable,” which prompted zero reflection or improvement from me, automatically making it not very useful. I heard similar things from friends with acceptable/strong ratings, so I’m willing to bet most of the class won’t change anything as a result of their midterm feedback. That doesn’t seem like a good way to improve our classroom discussions.

The second was a “needs improvement,” which is the most actionable of the buckets. It was helpful insofar as it forced this meta-reflection about feedback.

As mentioned, the “needs improvement” feedback is about quantity for most people. Given that, a histogram of “number of comments” for each student would be much more objective. I know how many times I’ve talked, but I have no idea how many times other people have; a histogram would be a far better yardstick for self-correction. Data would feel a lot less personal, and I bet that would make people more willing to accept the feedback. It would give me a much more compelling reason to raise my hand than a form email that says “raise your hand.”

I think that data could help across the spectrum. An individual with fewer contributions could see how big the gap really is. An individual grouped in “strong” could distinguish between “high quality” and “tons of comments.”
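To make that concrete, here’s a minimal sketch of the chart I have in mind, in Python with matplotlib. Every number below is invented; the real counts would have to come from whatever tally the professors already keep.

```python
import matplotlib.pyplot as plt

# Hypothetical midterm comment counts for a section (invented data).
comment_counts = [0, 1, 1, 2, 3, 3, 4, 5, 5, 6, 8,
                  9, 11, 12, 14, 15, 18, 22, 25, 31]
my_count = 5  # the one number each of us already knows

plt.hist(comment_counts, bins=10, color="lightgray", edgecolor="black")
plt.axvline(my_count, color="red", linestyle="--",
            label=f"me: {my_count} comments")
plt.xlabel("Comments so far this semester")
plt.ylabel("Number of students")
plt.title("Section participation at midterm (hypothetical data)")
plt.legend()
plt.show()
```

One dashed line against the whole distribution answers the question the form letter can’t: how big is the gap, really?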

This feedback also isn’t timely. There’s a big difference between week 0 and week 8. I’d be curious to see the graphs of participation over time. I’m willing to bet a lot of the “needs improvement” people have an upward trend. I’m also willing to guess a lot of them are introverts who like to understand the situation before jumping in.
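Again as a sketch with invented weekly counts, the trend is a simple least-squares fit:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical weekly comment counts for one student, weeks 0 through 8.
weeks = np.arange(9)
comments = np.array([0, 0, 1, 1, 2, 2, 3, 4, 4])

# Fit a line; a positive slope is exactly the upward trend
# that a single midterm snapshot can't show.
slope, intercept = np.polyfit(weeks, comments, 1)

plt.plot(weeks, comments, "o-", label="comments per week")
plt.plot(weeks, slope * weeks + intercept, "--",
         label=f"trend: {slope:+.2f} comments/week")
plt.xlabel("Week")
plt.ylabel("Comments")
plt.title("Participation over time (hypothetical data)")
plt.legend()
plt.show()
```

The slope, not the snapshot, is the number that says whether someone is heading in the right direction.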

Frankly, no one is unaware that they haven’t been participating. So why give negative feedback in the middle of a possible upward trend? That’s discouraging for no reason; the form email even acknowledges that it’s discouraging.

I think sporadic but more direct personal feedback would have much more impact on the classroom experience. Having a professor drop by my chair before class and say “hey, I’d really like to see your hand up today” would make me raise my hand. Alternatively, a one-off “really enjoyed your comment today” would make my day. Either would be far more encouraging than an “acceptable” or “strong” form letter, and would encourage more comments of the same type.

As of now, the only relevant action from midterm feedback is “raise your hand more” or “keep raising your hand.” We all already knew that, because they’ve been saying it since Day 1.

The current feedback system sits in an uncanny valley: it pretends to be personal feedback without adding insight for students. My hypothesis: either an objective system or an informal but personal one would have a greater impact on the classroom.