*Editors’ note*: This is the second of a series of blog posts on the state and history of the University of Michigan’s undergraduate mathematics program. Calculus reform came to Michigan in the early 1990s—closing in on 25 years ago. So this is another installment in a “25 year retrospective.”

“…Don’t you realize now, what you see is me

Tell me what you see.”

–The Beatles

Observations of teaching may occur in many contexts: in a new-instructor training program, in support of reference-letter preparation, in the evaluation or review of an instructor, to give other instructors insight into different teaching styles, and more. In our Introductory Program courses (by which we mean here our course before calculus, Calculus I, and Calculus II), we have regularly done classroom observations of new instructors as part of our instructor training and support program. The manner in which these have been done has changed slightly through the years, but overall retains the goals and general structure it had when Michigan started with “reform” calculus in the 1990s.

Our observations of teaching include *class observations* and *feedback visits*. In both cases the primary goal is to provide formative evaluation, that is, to promote the instructor’s improvement as a teacher. We describe each of these below, provide some of the historical background of how they have been used, and give some of the logistical and administrative details connected with their use.

In both *class observations* and *feedback visits* an observer visits an instructor’s classroom. In the former they observe only, while in the latter they both observe and solicit feedback from the students in the class.

Our classroom observations are initiated by the observer, who suggests a few possible days to the instructor, and the two then settle on a class period for the visit. Who does the observing has changed over time; currently, observers are course coordinators, program directors, graduate student co-coordinators, and/or experienced graduate students whose experience is equivalent to that of a graduate student course co-coordinator. On the day of the observation, the observer arrives, sits somewhere out of the way, and observes the full class period. The observer takes notes on what the instructor and students are doing at different times in the class period, where the instructor is, who is asking questions and who is answering them, what the instructor writes on the board, significant points or events in the class, and so on.

Frequently, the observation notes include a column (or page; in Figure 1, the all caps text) showing what the instructor wrote on the blackboard; a column (or page) highlighting what they say and interactions with and between students (in Figure 1, the lower-case writing); and a map showing where students are in the class, which of them are contributing in different manners, and the instructor’s movement between them. An example of (part of) a set of observation notes is shown in Figure 1, and an example of a map of an instructor’s movement in the classroom during an observation is shown in Figure 2. Of particular interest in the observation are characteristics of the class such as: who is doing the mathematics (is the instructor lecturing, or are students actively engaged with the material?), who is asking questions (the instructor, a handful of students, or many of them?), and who is answering them (the instructor, one or two students, many other students, or group members?).

Following the observed class, the observer will review their notes and thoughts on the observation, and formulate a short summary to be given to the instructor. This summary will include information about the class that was observed, specific aspects of the course that were positive (“what is going well”), and comments on how the lesson could have been improved (“suggestions”). Formulating this summary generally takes the observer on the order of an hour. The observer then meets with the instructor to review how the class went, go over the summary and other observations from the observer, and answer the instructor’s specific questions about that class and teaching in general. This meeting also usually takes on the order of an hour.

Feedback visits were developed by Beverly Black at the University of Michigan[1], who at the time worked with the University’s Center for Research on Learning and Teaching (CRLT)[2]. The initial portion of the visit is similar to that of a classroom observation, with the observer arranging a time to visit and taking notes while attending the class. However, in the last 20 minutes or so of class the instructor leaves the room and the observer solicits specific student feedback from the class about what is going well and what could be improved. This is done in a structured manner.

First, the observer introduces themself, notes that all comments that are gathered will be anonymous (the observer will type up responses), and that the point of the visit is for the instructor to gain students’ feedback on what they can do to improve students’ learning for the rest of the semester.

Following the introduction, a feedback form (Figure 3) is distributed to groups of four students. The instructions are to have a recorder in the group write down comments that all members of the group agree on, in two categories: major strengths of the course; and changes that could be made to improve the learning environment. Students are encouraged to be as specific and practical as possible, providing concrete examples where they can.

After students have had the opportunity to formulate their thoughts, the observer moderates a full-class discussion and writes on the board suggestions in both categories on which there is a consensus. In some cases, a comment may be written on the board with an indication of the degree to which there was some disagreement with the statement.

At the end of class, the observer records (or photographs) the full-class comments and erases the board. After the visit, the observer prepares a summary document similar to that created for a classroom observation, indicating all of the comments (both positive and constructive) on which there was consensus. Where appropriate, and with annotation, comments that did not have uniform support may be included as well. As with the class observation, formulating this summary generally takes on the order of an hour.

Once the summary is created, the observer meets with the instructor and discusses the summary, answers questions, and provides closure on the visit. This meeting also usually takes about an hour.

Pat Shure, who directed the Introductory Program as it was reformed in the 1990s, reports that new instructors were observed as they started teaching from the start of our calculus reform, and possibly before[3]. Starting with the implementation of calculus reform, observers were both faculty in the department and staff from CRLT, and the observations were feedback visits which were tied in part to the evaluation of the revised courses[4]. CRLT did a large number of these visits until about 2004, when the workload associated with supporting them proved to be in excess of their resources (though they continue to have a program that provides a class observation for any faculty member requesting one). Since then, the Department has done all class observations for new instructors in mathematics.

In the mid-2000s, all new instructors in the Introductory Program had two class visits from the course coordinators, co-coordinators, and supporting faculty in the Program[5]. The author’s recollection is that one of these was a classroom observation while the other was a feedback visit (this is consistent with CRLT having provided feedback visits as our reform calculus was implemented). In 2007, for example, there were 38 new graduate student instructors and 5–6 new post-doctoral and visiting faculty who were each visited twice, for a total of about 90 visits. These were done by seven observers, who were course coordinators and co-coordinators, and three other faculty associated with the Introductory Program.

Providing this number of visits became unsustainable without additional resources, however: the visits in 2007 required an average of about 20 hours a semester from each of the observers for the class visits alone (our Introductory Program courses meet for 80 minute class periods)—and a total of on the order of 40 hours once time to generate summaries and meet with instructors was included. For lack of additional observers, by the end of the first decade of the 2000s new instructors were getting a single classroom observation visit, a practice that continues to the present. In cases where the classroom observation or other data suggest that there are challenges in a new instructor’s classroom a second observation will be done, and when there are issues in an instructor’s first semester of teaching another classroom observation is done early in their second semester of teaching.

Since 2015, all Introductory Program mathematics courses have been taught in sections of 18 students, which has significantly increased the number of new instructors requiring class observations. To be able to address the additional observations that are required, observers now include the course coordinators and co-coordinators (eight faculty and graduate students in the fall semester) and several additional experienced graduate students who are hired to assist with the observations.

As noted above, CRLT still supports feedback visits for instructors who request them, which are available in addition to these Department observations.

Managing the logistics of the classroom observations is a task unto itself. Historically the Director of our Introductory Program has assigned observers to instructors and determined when observations should start. The observations are done early in the semester, so that the feedback from the observation can be used by the instructor to improve their teaching. The Director additionally works with our administrative office to hire the non-coordinator observers, and is responsible for training all observers. Training new observers involves the Director meeting individually with each, explaining the logistics and expectations for the visits, summaries, and instructor meetings. Because these observers were themselves observed when they started teaching in the Department, they know the general structure of the program from an instructor’s perspective.

The observers are then responsible for scheduling their visits, writing up their summaries, and meeting with the instructors to discuss the observations. When an observer determines that there are potential issues in a classroom, they follow up with the Course Coordinator and Director to ensure that a second visit or other appropriate action is taken. The Department also maintains a simple database system to keep track of the outcomes of the observations. This allows observers to enter general descriptive information about the visit (how well logistics such as board work and class control were managed), as well as the comments and suggestions that the observer provided the instructor. This primarily allows the Director access to these data without having to track down a given observer.

There are a number of aspects to this type of large-scale observation program that do not fall in the category of logistics, *per se*, but are practical considerations that merit note. These include the management of instructors’ concerns and students’ expectations.

In many parts of our instructor support, we find that our goal of formative evaluation, that is, providing feedback so that instructors can improve, is often clearer to those who run the program than to the new instructors in it. Throughout the training program there is a tendency for instructors to view the evaluation as summative, a grading of their teaching with possible ill effect. Accordingly, we work to communicate to instructors that the observations are not an evaluation of their teaching, but instead serve to help them improve it. This message is bolstered by the fact that instructors uniformly find the observation process helpful.

Management of expectations is also important for feedback visits, in which students are being asked for their opinion on the instruction in their section. In this case there are many things that are specific characteristics of the course which students may otherwise feel the instructor is imposing on them. For example, our Introductory Program courses include team homework that is worked on by groups of four students, and the courses are conceptual and require significant work and thought. These are things at which students frequently bridle, but are beyond the instructor’s control. Therefore, we often begin the discussion of the feedback portion of such a visit with a framing of the course and what instructors do and do not have the authority to change.

In addition to the classroom observations that we do for all new instructors in our Introductory Program, we provide a second feedback mechanism that captures some of the same information afforded by the feedback visits. At about the midpoint of the semester, after the first midterm exam, we give each class section an online survey that includes the same prompts and questions as the University-administered teaching evaluation given at the end of the semester. For each instructor, the feedback from this survey is reviewed by the coordinator of the instructor’s course before being released to the instructor. In those rare cases in which the student feedback suggests significant issues in a class section, the coordinator will meet with the instructor to provide insight on the nature of the comments and suggestions for changes the instructor should make. Similarly, the responses to the midterm evaluations may suggest that an additional classroom observation should be scheduled for the instructor.

It has been clear from the start of our “reformed” calculus program that the training and support we provide for our (new) instructors are essential for its continued success. Our new instructor training is a topic unto itself, and runs for the week before our classes start in the fall. However, it is clear that training before the semester without additional support is insufficient to sustain the forward instructional momentum that we aim to initiate when training.

What does good teaching look like? In our classrooms, it is active and noisy, with an instructor who is providing their students with similar guidance to that which we seek to provide the instructors. Good teaching looks like good learning: participants are actively engaged with their subject (or subjects), doing work to understand it (or them), and struggling productively with what is necessarily an ongoing and difficult task. In the vernacular of our School of Education, this struggle with teaching and learning what we need to know to teach better is also mathematics.

[1] Black, B. 2015. Personal communication.

[2] CRLT. http://crlt.umich.edu/. Accessed 31 May 2018.

[3] Shure, P. 2018. Personal communication.

[4] Brown, M., A. Taylor, & P. Shure. 1992. A New Calculus Program at the University of Michigan. NSF Proposal DUE-9252503.

[5] Personal recollection: the author has a spreadsheet scheduling two visits for each new instructor in 2007.

Our version of this course in the past appears to have been rooted in the traditional “recipe-driven” course, with evolutionary changes moving in the direction of a more conceptual, more demanding course. Resource constraints require that it be taught in a lecture of on the order of 100 students. The lecture meets three times a week for a standard 50 minute period. For very many years—since at least the mid-90s, and possibly earlier—there has been an extra hour a week associated with the course which has been in some weeks a computer lab and in others a standard recitation. The students in this course are 75–85% from the College of Engineering; the Department offers other courses in differential equations for other clientele (*e.g.*, a course that assumes that students have had a course in linear algebra and introduction to proof writing).

The original computer labs for the course were developed in the early to mid-90s and centered around Euler’s method and some applications; they underwent a major redesign in the late 1990s.[1] Those were the basis for the labs that were in use for the ensuing 10–15 years—let it not be said that our work does not have an ongoing impact! Between then and 2016, the labs evolved slowly, largely in the direction of requiring less student knowledge of *Matlab* (the package used for the labs, as all engineering students at the University learn and use it) and, arguably, less conceptual engagement. Students spent five sessions working on the five labs; in the remaining weeks they met in recitation, where the expected recitation-type activities (question answering, material clarification) took place.
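For readers who have not seen it, Euler’s method, the core of those original labs, is only a few lines of code. Here is a minimal sketch in Python (a stand-in for the *Matlab* the course actually uses):

```python
def euler(f, t0, y0, h, n):
    """Approximate the solution of y' = f(t, y), y(t0) = y0,
    taking n steps of size h; returns the list of (t, y) pairs."""
    t, y = t0, y0
    trajectory = [(t, y)]
    for _ in range(n):
        y = y + h * f(t, y)   # follow the tangent line for one step
        t = t + h
        trajectory.append((t, y))
    return trajectory

# Example: y' = y, y(0) = 1, whose exact solution is e^t.
approx = euler(lambda t, y: y, 0.0, 1.0, 0.1, 10)
print(approx[-1])  # approximately (1.0, 2.5937...), versus e = 2.71828...
```

The gap between the Euler approximation and the exact value is itself a useful lab discussion point about step size and error.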

Common wisdom is that the course as it was before revision was perceived by students as easy and, being straightforward, a “good course.” An unscientific survey of student comments on ratemyprofessors.com[2] bears this out, finding it characterized as “the easiest of the required engineering math classes.”[3] Feedback on the labs in teaching evaluations suggested that students saw them as little more than hoops to jump through that didn’t connect to the course as a whole. Recitations, not surprisingly, were regarded as much more helpful. One might suppose that it’s difficult not to like a space where one is told the answers one seeks.

At the end of the day, there is an increasing and persuasive body of evidence that indicates that if we are going to make a (mathematics) course effective, in the sense of engendering student learning, we must get students actively engaged with the material.[4,5,6] It is clear from the preceding discussion that the course was not necessarily doing a good job of this before its revision. Active engagement is difficult when the primary instruction takes place in a large lecture, and even the labs and recitations were not well structured to ensure active learning.

We had some advantages and some disadvantages when planning the revision to this course. It is coordinated by a course coordinator, who is able to set the textbook, syllabus, course materials and assignments, and exams. This allowed us to redefine the course as we would like—subject to maintaining support for the material needed by the engineering students who are the vast majority of those taking the course—which is a significant advantage. The two significant disadvantages are in its tradition of allowing instructors independence in the classroom and its format—logistically and politically we could not change it to a small-section course. The net impact of these constraints was that the lectures were likely to remain lectures, except insofar as individual instructors sought to bend them to be more interactive or engaging.

Accordingly, our work on the course focused on the *course materials* and *lab sessions*. Expectations may help define what students learn, and the lab sessions/recitation are the place where we could push greater interactivity and engagement.

Further, we have one very positive data point supporting an approach to the course on this path. In 2014 we revised our Calculus III (multivariable and vector calculus; Stewart[7], chapters 12–16) course. That course is taught in the same format as this differential equations course, and our revision involved moving the computer labs in which students worked from a static classroom arranged in rows to an interactive lab environment. At the same time the lab materials themselves were rewritten to be more substantial and conceptual and to require greater student collaboration. To assess the impact of this change we did pre-/post-testing with an internally written test measuring students’ understanding of the course material. Students’ pre- to post-test improvement after we had implemented the revision was dramatically better than before. We will be the first to caution against putting much weight on a single unpublished study, but, still, the improvement was hard to ignore.

Our goals in revising the course were:

- to increase students’ engagement in the course,
- to extend the material, and to make better connections between different topics within the course and between the course material and that from other courses,
- to improve students’ understanding of the connection between the labs and the course as a whole, and
- to update the course to have a more “modern” approach to differential equations and have a greater emphasis on conceptual understanding.

These objectives, and the changes described below, were evaluated and approved by the College of Engineering’s Director of First Year Programs and their Undergraduate Program. To accomplish them, we revised the course syllabus and adopted a new textbook[8], and updated the assignments and labs. The book itself we found to be stronger mathematically than that which we had been using, and it takes a “systems first” approach, introducing systems of two first-order equations immediately after treating first-order differential equations. Completely new labs were written in the course of the summer of 2016 by a post-doc funded by a gift from *MathWorks*. The new labs were more demanding, strongly application based, and written to require that students work in pairs and fours to complete them. Other homework and exams were also revised to be more demanding and conceptual.
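For readers unfamiliar with the “systems first” contrast: any second-order linear equation can be rewritten as a system of two first-order equations, so treating systems first means treating the more general object first. A standard sketch (not a derivation taken from the textbook):

```latex
y'' + p(t)\,y' + q(t)\,y = 0
\quad\Longleftrightarrow\quad
\begin{cases}
x_1' = x_2,\\
x_2' = -q(t)\,x_1 - p(t)\,x_2,
\end{cases}
\qquad x_1 = y,\ x_2 = y'.
```

The second-order constant-coefficient theory then drops out of the eigenvalue analysis of the corresponding linear system.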

These changes were all implemented in fall 2016, with admittedly somewhat rough (“beta”) versions of the labs. As part of the implementation, beginning mid-semester we held a weekly meeting with the graduate students teaching the lab sections of the course, which proved invaluable for figuring out what was really going on there, what was working well, and where there were issues.

Each of the new labs has an application around which the work revolves.

Lab | Model | Mathematical Goals
---|---|---
1 | Gompertz model for the size of a cancer tumor [e.g., 8, p.27] | Introduce series approximations to solutions of differential equations and the linearization of nonlinear equations by Taylor expansions.
2 | The van der Pol oscillator, an active RLC circuit [e.g., 8, pp.500–504] | Introduce systems of differential equations and phase plane analysis, linearization of nonlinear systems, and the differences between linear and nonlinear behavior.
3 | A model for a ruby laser [9] | Introduce systems of more than two equations, examine the behavior of second-order equations and response to sinusoidal forcing.
4 | A chemical equations model [10] | Explore numerical methods for the approximation of solutions and the effect of stiffness on approximation error.
5 | The Lorenz equations [11] | Examine linearization and nonlinear behavior, explore bifurcations, and see chaos.

Because it’s fun to see the equations that these involve, they are included below, written in standard dimensionless forms (the labs’ own scalings and notation may differ):

Lab | Model
---|---
1 | \(x' = -a x \ln(x/b)\)
2 | \(x' = y\), \(y' = \mu(1 - x^2)y - x\)
3 | \(I' = I(D - 1)\), \(D' = \gamma(A - D(1 + I))\)
4 | \(y_1' = -0.04\,y_1 + 10^4 y_2 y_3\), \(y_2' = 0.04\,y_1 - 10^4 y_2 y_3 - 3\times 10^7 y_2^2\), \(y_3' = 3\times 10^7 y_2^2\)
5 | \(x' = \sigma(y - x)\), \(y' = x(\rho - z) - y\), \(z' = xy - \beta z\)
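As a taste of what Lab 5 computes, here is a minimal Python sketch (the labs use *Matlab*) of Euler-stepping the Lorenz equations with the standard parameter values \(\sigma = 10\), \(\rho = 28\), \(\beta = 8/3\) (an assumption here; the lab’s own setup may differ), illustrating sensitive dependence on initial conditions:

```python
# Step the Lorenz equations with Euler's method; standard parameter
# values (sigma=10, rho=28, beta=8/3) are assumed for illustration.

def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def trajectory(state, dt, steps):
    out = [state]
    for _ in range(steps):
        state = lorenz_step(state, dt)
        out.append(state)
    return out

# Two trajectories from nearly identical initial conditions diverge
# until the difference is as large as the attractor itself.
a = trajectory((1.0, 1.0, 1.0), 0.002, 20000)
b = trajectory((1.0, 1.0, 1.000001), 0.002, 20000)
print(max(abs(p[0] - q[0]) for p, q in zip(a, b)))
```

Plotting either trajectory gives the familiar butterfly; comparing the two makes the chaos discussion in the lab concrete.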

Each of the labs is structured to have a strong emphasis on collaborative, engaged work. They are completed on a two-week cycle. In the first week, students have a pre-lab due that requires them to work through some basic mathematics related to the lab; its goal is to introduce the lab and give a context for it. In the lab period, students work in pairs drawn from a team of four on *Matlab* exercises that explore the concepts in the lab. The exercises worked on by the two pairs are similar, but address slightly different, complementary aspects of the material or problem being considered. The following week, students use the work that both pairs did as they continue with a second set of *Matlab* exercises. At the beginning of the lab period after the second workday, the team submits a lab writeup that explores the mathematics and meaning of the work they did in the previous two weeks.

The biggest change to the course was in the restructuring of the labs. However, there were other changes, the most significant of which were in the course material and its organization and in the assignments and exams.

The course material pre- and post-revision is summarized below.

# | Topic (Pre) | Topic (Post)
---|---|---
1 | first-order equations | first-order equations
2 | higher-order linear, constant-coefficient equations | systems of two linear first-order equations, phase plane analysis
3 | systems of linear first-order equations, phase plane analysis | second-order linear, constant-coefficient equations
4 | systems of two autonomous non-linear first-order equations | Laplace transform techniques
5 | Laplace transform techniques | systems of two autonomous non-linear first-order equations

The effect of the changes is two-fold: the new order emphasizes phase plane analysis and a dynamical systems perspective when considering differential equations, and it distributes the material students find more difficult more evenly across the semester. The more difficult topics are 3–5 in the old syllabus and 2 and 4 in the new one. Treating systems and the phase plane early in the new syllabus makes the analysis of nonlinear systems much less formidable. Both effects are very valuable, though the benefit of the rearrangement wasn’t fully appreciated until we got into the first semester of teaching with the new syllabus!
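To make the phase-plane emphasis concrete, the generic linearization step that the early treatment of systems supports looks like this (a standard sketch, not a problem taken from the course materials):

```latex
\frac{d\mathbf{x}}{dt} = \mathbf{f}(\mathbf{x}),
\qquad \mathbf{f}(\mathbf{x}^*) = \mathbf{0}
\quad\Longrightarrow\quad
\frac{d\mathbf{u}}{dt} \approx J\,\mathbf{u},
\qquad
J_{ij} = \left.\frac{\partial f_i}{\partial x_j}\right|_{\mathbf{x}^*},
\qquad \mathbf{u} = \mathbf{x} - \mathbf{x}^*.
```

The eigenvalues of \(J\) then classify the equilibrium (node, saddle, spiral, or center), which is exactly the phase plane analysis that the revised syllabus reaches early in the semester.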

The second change, in assignments and exams, resulted in them being more conceptual and, demonstrably, more challenging. The number of written homework assignments was decreased (from 10–11 to 5), but the problems were made significantly more substantial. Problems on the two midterms and final reflected the more conceptual emphasis in the course, and were also significantly harder (the median of exam scores over the two semesters preceding revision was 80%; in the first semester post-revision, 61%).

*Student evaluation*

Perhaps the most obvious result of the revision was that students didn’t like the course becoming more challenging (which it did). Instructors’ teaching evaluations almost certainly reflected this, and many comments on evaluations expressed concern about the demands of the course (now “…by far the most difficult” of the required math classes for engineering[3]).

That said, while course expectations were significantly higher, students largely rose to them. Students’ perceptions of the lab materials, as measured in a survey at the end of the semester, were not positive—but we have no comparative data for the labs they replaced, and we suspect those would have been rated similarly or worse on helping students learn the material and on connecting with the course material. Still, students reported that the labs improved their ability to use *Matlab* (41 responses out of 124 total), demonstrated connections to real-world applications (18 responses), aided in visualization of the mathematics being studied (17 responses), and improved their ability to work in teams (7 responses). The graduate students teaching the labs felt that students generally understood the labs and the mathematics in them, though some students got hung up on the difficulty inherent in programming in a computer environment like *Matlab*.

We do not have a direct way of measuring student learning before and after implementation of the course revision, because the timeline of the revision didn’t allow doing any sort of assessment before we implemented the changes. However, we do have some measures by which we may assess the impact of these changes. There is no doubt that the new lab materials substantially increase students’ engagement with the material, as well as the amount of time they spend on collaborative work, both of which are essential for improved learning.[4,5] There are substantially better connections between the materials and the course, even if these need greater emphasis for students to see them. And it is clear that the use of applications and overall conceptual focus of the course increased.

So, was it all effective? It’s hard to determine the actual impact on student learning, which is the real goal of all that we are doing here. There was clear and loudly voiced concern from students that the course changed suddenly and for the worse (one student wrote on the final exam “You’ve ruined this course. I hope you’re happy!”).

But at the same time I think we have to have some faith in our understanding, as educators, of what we are doing. The inclusion of more engaged learning in any course must be a good thing, and an increasing focus on conceptual understanding is at the end of the day aligned with our sense of what is important.

Effect? We have effected more engaged learning and a better overall course focus. Is that effective? In the long run it must be.

[2] Rate My Professors. <http://www.ratemyprofessors.com/>, accessed 2/8/2017.

[3] Peter Gavin LaRose at University of Michigan. (Rating post date 11/29/2016) <http://www.ratemyprofessors.com/ShowRatings.jsp?tid=156282>, accessed 2/8/2017.

[4] Freeman, S., et al. (2014). Active learning increases student performance in science, engineering, and mathematics. *Proceedings of the National Academy of Sciences* 111(23), 8410–8415.

[5] Kogan, M. & S.L. Laursen (2014). Assessing long-term effects of inquiry-based learning: A case study from college mathematics. *Innovative Higher Education* 39(3), 183–199.

[6] Conference Board of the Mathematical Sciences (2016). Active Learning in Post-Secondary Mathematics Education. <http://www.cbmsweb.org/Statements/Active_Learning_Statement.pdf> (July 15, 2016), accessed 2/8/2017.

[7] Stewart, J. (2012). *Calculus*, 7th ed. Brooks/Cole, Belmont, CA.

[8] Brannan, J.R. & W.E. Boyce (2015). *Differential Equations: An Introduction to Modern Methods and Applications*, 3rd ed. Wiley, Hoboken, NJ.

[9] Erneux, T. & P. Glorieux (2010). *Laser Dynamics*. Cambridge University Press, Cambridge.

[10] Hairer, E. & G. Wanner (1996). *Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems*, 2nd ed. Springer, Berlin.

[11] Sparrow, C. (1982). *The Lorenz Equations: Bifurcations, Chaos, and Strange Attractors*. Springer, New York.

We did this in two ways — first, with large multi-week group projects, and second with classroom demos using real data and *MATLAB* or *Mathematica* scripts. For the first projects, we chose the applications and walked them through the computation. For the second, we challenged them to go out into the world and find us a real application, which they presented in a packed poster fair on the last day of class.

Afterwards, some other instructors told me they were surprised at how many examples I found. So I wrote up a quick note with my favorite examples. Here is a cleaned-up version of that note. It mixes together in-class demos and group projects, as I think most of these could work at either length.

One thing I’d like to do better in future terms is see if I can find a way to have the students do in-class experimentation with some of these ideas. When I brought this material into class, I was using my laptop and premade scripts on the projector, taking questions and ideas for what sort of computations to run, but still fundamentally controlling the presentation. The big challenge is that, although most of the students have used MATLAB, and many are CS majors, most of them are not actually fluent coders who can think of an idea and immediately make a MATLAB script to do it. (The small challenge is getting enough laptops and desk space, but I believe I could solve this one.) Still, it would be great if we could.

One deficiency of this list is that I don’t have a lot of good applications for the first third of the term — solving linear equations, reduced row echelon form, subspaces, bases, dimension. I hope I’ll solve this in future terms.

**Discrete Fourier analysis** I pointed out that the matrix of regularly sampled sines and cosines is orthogonal (with appropriate weighting). I justified this heuristically, pointing out that the orthogonality was easy for the integral version. I projected 10 years of Detroit weather data onto its leading Fourier terms.
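A quick numerical check of this orthogonality is easy to run. Here is a sketch in Python/NumPy (standing in for the MATLAB scripts mentioned above), with an exact made-up signal in place of the Detroit weather file:

```python
import numpy as np

# Sample cos(kt) and sin(kt) at n equally spaced points on [0, 2*pi).
n = 12
t = 2 * np.pi * np.arange(n) / n

# Columns: constant, cos(t), sin(t), cos(2t), sin(2t).
F = np.column_stack([np.ones(n),
                     np.cos(t), np.sin(t),
                     np.cos(2 * t), np.sin(2 * t)])

# The columns are orthogonal: F.T @ F is diagonal, with n for the
# constant column and n/2 for the rest -- the "appropriate weighting".
G = F.T @ F
print(np.round(G, 10))

# Projecting data onto these columns is a least-squares fit by the
# leading Fourier terms; here the "data" is exactly 50 + 20*cos(t).
data = 50 + 20 * np.cos(t)
coeffs, *_ = np.linalg.lstsq(F, data, rcond=None)
```

Because the columns are orthogonal, the projection coefficients can also be computed one at a time as weighted dot products, which is the point of the heuristic argument above.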

**Least squares** I fit MSU tuition data to a quadratic. (Remember to correct for inflation!) By the way, I would like to scowl at UMich for not making this data comparably easy to find.

Least squares models are omnipresent in statistics, so you can choose any example you like. I wanted to stay away from best-fit lines, though, because I wanted to make the point that you can use least squares with a model of the form $y = c_0 + c_1 x + c_2 x^2$; the nonlinearity in $x$ is not a problem. I think most students take a while to understand this, but it is good to make them work through it.

This example was a short one, so I also used it to try to convince students that they had the tools to get at real datasets and work with them. Of course, the better way to do this would be to make THEM seek out a real data set…
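To illustrate, here is a minimal sketch (in Python/NumPy rather than the MATLAB I used in class, and with fabricated numbers in place of the real tuition data) of fitting a quadratic by least squares. The model is nonlinear in the year but linear in the unknown coefficients, which is all least squares needs:

```python
import numpy as np

# Made-up (year, tuition) data standing in for the real dataset;
# chosen to lie exactly on y = 100 + 2x + 2x^2 for illustration.
years = np.array([0, 1, 2, 3, 4, 5], dtype=float)
tuition = np.array([100.0, 104.0, 112.0, 124.0, 140.0, 160.0])

# Model y = c0 + c1*x + c2*x^2.  Each column of the design matrix is
# one basis function evaluated at the data points.
A = np.column_stack([np.ones_like(years), years, years ** 2])
coeffs, residuals, rank, _ = np.linalg.lstsq(A, tuition, rcond=None)
print(coeffs)
```

Swapping in other basis functions (exponentials, sines) only changes the columns of the design matrix, not the method.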

**Eigenvalues and oscillations of mechanical systems** Take three masses of mass $m$ on a line, with masses 1 and 2, and masses 2 and 3, joined by springs of spring constant $k$. Writing $x$ for the vector of displacements, we get a system of ODEs

$$m \ddot{x} = -kAx, \qquad A = \begin{pmatrix} 1 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \end{pmatrix}.$$

Guessing a solution $x = v \cos(\omega t)$, you find that $m\omega^2/k$ is an eigenvalue of $A$ and $v$ is the corresponding eigenvector. After I worked this small example, I talked about oscillation of molecules and of large mechanical structures (via the finite element method). I returned to this example when we talked about quadratic forms, where I could point out that the matrix here is symmetric, and the corresponding (positive semidefinite!) quadratic form $\tfrac{k}{2}\, x^T A x$ is the energy stored in the springs.
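For readers who want to experiment, here is a sketch of the normal-mode computation in Python/NumPy (the MATLAB demos translate directly). It assumes the displacement form of the system, $m\ddot{x} = -kAx$, with the symmetric matrix written out below:

```python
import numpy as np

# Three equal masses on a line, springs (constant k) joining masses
# 1-2 and 2-3.  Newton's law gives m x'' = -k A x with:
A = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])

# Guessing x(t) = v cos(w t) turns the ODE into the eigenvalue
# problem A v = (m w^2 / k) v, so the eigenvalues give the squared
# frequencies of the normal modes (in units of k/m).
evals, evecs = np.linalg.eigh(A)
print(np.round(evals, 10))

# The zero eigenvalue is the translation mode (1, 1, 1): the whole
# chain sliding rigidly, with no spring stretched at all.
```

The eigenvalues come out as 0, 1, and 3 (times $k/m$), and the eigenvectors are the familiar symmetric and antisymmetric stretching modes.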

**Demography and dynamical systems** I took the data from page 39 of the Pew report and investigated how the distribution of religious demographic groups in the US would change over time in terms of eigenvectors. This was basically replicating 538’s analysis, but was (I hope!) more interesting than talking about rabbits and foxes.

Thinking about these issues also gave me a new way of thinking about society. We tend to think about different segments in society as vying for complete dominance — will we be liberal or conservative? religious or secular? Any division seems a temporary state, simply waiting for one side to dominate. But, if your matrix has off-diagonal terms, then the equilibrium state will actually be a mix of populations. You don’t *need* math to see this, but I feel like I only got an intuition for it by seeing the math.
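The eigenvector computation behind this is tiny. Here is a sketch with a made-up two-group transition matrix (not the actual Pew numbers) showing that, with off-diagonal terms, the dynamics settle into a mixed equilibrium rather than a takeover:

```python
import numpy as np

# A made-up two-group transition matrix: each column says where that
# group's members end up a generation later.  Column sums are 1, so
# total population is conserved.
T = np.array([[0.9, 0.2],    # A stays A / B switches to A
              [0.1, 0.8]])   # A switches to B / B stays B

# Iterating from ANY starting split converges to the eigenvector for
# eigenvalue 1 -- a mixed equilibrium, not one side dominating.
pop = np.array([1.0, 0.0])   # start: everyone in group A
for _ in range(200):
    pop = T @ pop
print(np.round(pop, 4))      # roughly (2/3, 1/3)
```

The limiting split $(2/3, 1/3)$ is exactly the eigenvalue-1 eigenvector of $T$, normalized to sum to 1, and the second eigenvalue (here $0.7$) controls how fast the system gets there.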

**Random walks** I took a couple of street networks and showed how a random walk on them would spread out in terms of eigenvalues. I showed how the eigenvalue gap reflects the connectivity of the graph.
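Here is a small illustration of the idea in Python/NumPy, using toy graphs rather than the street networks from class: a well-connected graph has a large gap between the top two eigenvalue moduli of its walk matrix, while a graph with a bottleneck has a small one.

```python
import numpy as np

def spectral_gap(adj):
    """Gap between the top two eigenvalue moduli of the walk matrix."""
    adj = np.asarray(adj, dtype=float)
    P = adj / adj.sum(axis=0)          # column-stochastic walk matrix
    mods = sorted(np.abs(np.linalg.eigvals(P)), reverse=True)
    return mods[0] - mods[1]

# Complete graph on 4 vertices: everything well connected.
K4 = np.ones((4, 4)) - np.eye(4)

# "Dumbbell": two triangles joined by a single edge -- a bottleneck.
D = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    D[i, j] = D[j, i] = 1

# The bottleneck shrinks the gap, so the walk spreads out more slowly.
print(spectral_gap(K4), spectral_gap(D))
```

The top eigenvalue is always 1 for a connected graph; it is the second one that encodes how quickly the walk forgets where it started.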

**Spectral graph partitioning** David Gleich has a great intro to this method for finding clusters in a graph of connections. I started with some artificial examples and then finished with this dataset of links between several hundred blogs. (That data is a decade old now! I’ll need to get fresher data next time.)

I also did some examples of spectral graph drawing.

There are tons of great examples of graphs available for download; check out these links 1, 2, 3, 4.
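For anyone who wants to try spectral partitioning on one of those downloaded graphs, the core computation is only a few lines. This sketch uses a small synthetic graph (two cliques joined by one edge) so the answer is easy to check by eye:

```python
import numpy as np

# Two 4-node cliques joined by a single edge (a planted partition).
n = 8
A = np.zeros((n, n))
for i in range(4):
    for j in range(i + 1, 4):
        A[i, j] = A[j, i] = 1                    # clique on {0,1,2,3}
        A[i + 4, j + 4] = A[j + 4, i + 4] = 1    # clique on {4,...,7}
A[3, 4] = A[4, 3] = 1                            # the crossing edge

# Graph Laplacian L = D - A; the eigenvector for its second-smallest
# eigenvalue (the "Fiedler vector") splits the graph by sign.
L = np.diag(A.sum(axis=1)) - A
evals, evecs = np.linalg.eigh(L)
fiedler = evecs[:, 1]
clusters = fiedler > 0
print(clusters)
```

For this graph the sign pattern recovers the two cliques exactly; on messier real data one typically sorts the Fiedler vector and looks for the best cut point.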

**Principal component analysis of statistical data** I used Cosma Shalizi’s file of 17 features of 390 cars and extracted principal components from it.

Again, it would be great to make the students do something like this instead!
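A minimal version of the computation, with synthetic data standing in for Shalizi's cars file, might look like the following. The features are deliberately correlated so that one principal component carries most of the variance:

```python
import numpy as np

# Synthetic stand-in for a cars dataset: rows are cars, columns are
# correlated features (weight, horsepower, mpg).
rng = np.random.default_rng(0)
weight = rng.normal(3000, 500, size=50)
hp = 0.05 * weight + rng.normal(0, 10, size=50)
mpg = -0.01 * weight + rng.normal(60, 2, size=50)
X = np.column_stack([weight, hp, mpg])

# PCA: standardize the columns, then take the right singular vectors
# of the centered data matrix (equivalently, eigenvectors of the
# correlation matrix).
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(np.round(explained, 3))   # first component dominates
```

The rows of `Vt` are the principal directions, and `Z @ Vt.T` gives each car's coordinates in the new basis.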

**SVD for image compression** I took a black and white bitmap and showed them how different numbers of singular values made the image come in. I’ve been told that this is a fake example, and no one actually does image compression this way. That’s too bad, because it is beautiful watching it work.
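Here is a toy version of the demo in Python/NumPy; a real photograph just means a bigger matrix and more singular values before the image "comes in":

```python
import numpy as np

# A fake 8x8 grayscale "image": a bright square on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0

U, s, Vt = np.linalg.svd(img)

def rank_k(k):
    """Best rank-k approximation of img (Eckart-Young theorem)."""
    return U[:, :k] * s[:k] @ Vt[:k, :]

# This particular image is an outer product, hence rank 1, so a
# single singular value reconstructs it exactly.
approx = rank_k(1)
print(np.allclose(approx, img))
```

Storing the rank-$k$ approximation of an $m \times n$ image takes $k(m + n + 1)$ numbers instead of $mn$, which is where the "compression" comes from.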

**Optimal portfolio allocation** I didn’t do it this term, but other times I’ve taught about optimal portfolio allocation. The problem is the following: You can distribute your money among various investments whose returns are normally distributed, and correlated with each other. You want to choose an allocation vector $x$ which maximizes expected return while taking a fixed level of risk. If $\mu$ is the vector of expected returns and $\Sigma$ the covariance matrix, the problem is to maximize the linear functional $\mu \cdot x$ subject to the quadratic constraint $x^T \Sigma x = c$. The answer is that you should pick $x$ proportional to $\Sigma^{-1}\mu$, and it is a nice example of the power of changing coordinates.
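A numerical sanity check of the closed-form answer is easy to set up. This sketch uses hypothetical expected returns and covariances for three assets (the names and numbers are made up) and compares the optimum $x \propto \Sigma^{-1}\mu$ against random portfolios with the same risk:

```python
import numpy as np

# Hypothetical expected returns and covariance for three assets.
mu = np.array([0.08, 0.05, 0.12])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.03, 0.01],
                  [0.00, 0.01, 0.09]])

# Maximize mu . x subject to x^T Sigma x = c.  Lagrange multipliers
# give mu = 2*lam*Sigma x, so the optimum is proportional to
# Sigma^{-1} mu, rescaled to sit exactly on the risk constraint.
c = 1.0
d = np.linalg.solve(Sigma, mu)            # direction Sigma^{-1} mu
x_opt = d * np.sqrt(c / (d @ Sigma @ d))  # scale onto the constraint

# Sanity check: no random portfolio at the same risk level beats it.
rng = np.random.default_rng(1)
for _ in range(1000):
    y = rng.normal(size=3)
    y *= np.sqrt(c / (y @ Sigma @ y))     # project onto the constraint
    assert mu @ y <= mu @ x_opt + 1e-9
```

The change of coordinates that diagonalizes $\Sigma$ turns the constraint into a sphere, on which the best direction for a linear functional is obvious; undoing the change of coordinates produces the $\Sigma^{-1}\mu$ factor.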

*Editors’ note*: This is the first of a series of blog posts on the state and history of the University of Michigan’s undergraduate mathematics program. Calculus reform came to Michigan in the early 1990s—closing in on 25 years ago. So this is the first installment of a “25 year retrospective.”

“In the new approach, as you know, the important thing is to understand what you’re doing rather than to get the right answer.”

–Tom Lehrer

What’s a gateway test? Our definition is that it is a test of basic skills. These may be skills that are prerequisite for success in a course, or may be skills which every student in a course should develop. For a pre-calculus course, we find that basic algebra skills and function manipulation are prerequisite skills; the canonical example of a skill intrinsic to a course is rule-based differentiation in calculus I. Of course, most calculators, Wolfram|Alpha, and many other tools can easily do the calculations that these skills allow students to perform, and we use calculators and other technology in our classes. So why should we test the skills? We do it because we find that students without prerequisite skills struggle in a course (even in the presence of technology), and we continue to believe that there really are skills (like differentiation in calculus I) that any student who has successfully completed the course should be able to do.

However, while these skills are intrinsic to or essential for success in the courses, they are not our educational focus. Our courses focus on conceptual understanding; e.g., for calculus I, what a derivative tells us, and how we can use this to solve problems—not how to find the derivative of a given formula. This conceptual understanding is fundamental to our courses, and it is therefore the subject of the bulk of class and instructor time, and it is what we evaluate on exams. Gateway exams are the mechanism by which we ensure that students also have or acquire the basic skills we expect of them, and we find that they work with minimal investment of in-class time [9].

Our gateway tests are 7- or 10-question tests, administered on-line (we use WeBWorK[3] as our homework and testing platform), which students may take multiple times and which they must complete with almost no errors to pass (we allow at most one error). Students may practice the tests as many times as they like, but to get credit for having passed the test they must take the test in a proctored lab where their identity is verified and where they are not allowed to use outside resources. The tests have a time limit of twenty or thirty minutes, depending on the test. Because the skills being evaluated on the gateway tests are those which we expect every student in the course to have or acquire, completion of the test is not a part of the students’ course averages; instead, any student who doesn’t pass a gateway by the specified deadline (students may take the tests in a 2–3 week window during the semester) has her/his final grade reduced by 1/3 to a full letter grade at the end of the semester.

This small carrot/large stick model plays well with student psychology. In general, students in a course will often assume that they can make up missed material later in the course by doing better on the final (or, not uncommonly, pleading with the instructor after the fact to be allowed to do extra credit). Because it is explicitly not possible to “make up” a gateway, or to get back the grade penalty by doing better on the final, there is a strong incentive for students to complete the gateways successfully.

We currently have five gateway tests and are adding a sixth in fall 2015. The “original” four are (1) an entrance gateway (that is, a test of prerequisite skills) for math 105 (Data, Functions and Graphs; a pre-calculus course), (2) a differentiation gateway in math 115 (calculus I), (3) an entrance gateway on differentiation and (4) an integration gateway in math 116 (calculus II). An entrance gateway on integration was added to math 215 (calculus III) in fall 2005. In fall 2015, math 214 (a linear algebra course taken by some engineers) will add a gateway test on matrix operations and row reduction.

Gateway tests as we use them at the University of Michigan appeared in the world of undergraduate mathematics education sometime before the 1980s. They were sometimes referred to as “barrier tests,” though this characterization appears to paint a half-empty rather than half-full glass. We prefer the term “gateway,” suggesting that they lead to something great. Duke introduced them in the 1980s as part of their calculus reform efforts [1], and the U.S. Military Academy was using them by the early 1990s [4]. The University of Michigan was pilot testing them for pre-calculus by 1992.[6] The tests in use in 1992 were slightly different from the test we now give, with five different tests of five problems each, which students were required to complete perfectly. These were pencil-and-paper tests initially generated by hand, and subsequently created with an “enormous” Lotus 1-2-3 [10] spreadsheet.[7,8] The first administration (or, possibly, two) of each test was done in class.[8] It is worth noting that for the first couple of years failure to pass the gateway test resulted in a failing grade for the course, though instructors were told to ensure that this didn’t happen unless the student would fail the course for other reasons anyway. Also when it was first implemented the deadline for completing the test was the end of the semester, which proved less desirable than fixing an earlier deadline.[8]

The pre-calculus gateway was revised, and a derivative gateway added to calculus I, in 1993 as the Department reformed its calculus sequence to create what was then called “New Wave” calculus. This was subsequently renamed “Michigan Calculus,” a moniker that we continue to use today. Gateways for calculus II were added by 1997. The calculus tests had seven problems and students were allowed at most one error, while the pre-calculus gateway had 12 questions and allowed at most two errors. We believe that the tests were generated using a modification of the original Lotus 1-2-3 spreadsheet, which created TeX files from a fairly expansive testbank which appeared larger than it actually was because the problems were numbered non-sequentially (e.g., 11, 12, 13, 14, 21, 22, 35, 36, etc.; problem numbers were printed on the tests that students took). When gateways were being taken, versions of the tests would be created and made available in the Math Learning Center (a.k.a. Math Lab: our free walk-in math tutoring center). At least some instructors would administer the test once in class, and students could take the test again, once per day, in the Math Lab, until the test came due.

The logistics of managing these tests were formidable. In fall semesters, our pre-calculus course had approximately 600 students enrolled; calculus I, approximately 1600; and calculus II, 800. Students taking gateway tests would come to the Math Lab and get a test from the student manning the desk at the front of the lab. The date and time would be stamped on the test when they received it, and they would complete the test at one of the tables in the Math Lab. When they had completed the test they would hand it in at the front desk, where it would be stamped with the date and time again and put in a box for grading. The student manager in the Math Lab, or the (faculty) Math Lab Director or the Introductory Program Director would grade the exams when they were not otherwise busy, and, in any case, not while a student waited. Grading was done with as close to zero tolerance for errors as possible, with a goal of having as fast a turn-around as possible.

This system worked, but in a somewhat qualified manner. Time constraints on grading had the unfortunate consequence that students would not get their graded exams back immediately, which could result in them taking the test again before finding that a previous test had a passing grade. One could imagine a situation in which a student in a class that met Monday, Tuesday and Thursday could take the test after class on Monday, not get it back on Tuesday—and therefore have taken a test on Tuesday, Wednesday, and Thursday (before class) before discovering in class on Thursday that they had in fact passed the test on Monday! And the Math Lab was frequently overwhelmed by the number of students coming in to take a gateway test and for tutoring, and was therefore neither a good testing environment nor easy to monitor. To (try to) address the grading issues, the tests were converted to multiple-choice, but this was done with a heavy heart and the (fulfilled) expectation that it would reduce the efficacy of the tests. The math 115 and 116 gateways were converted back to short-answer when gateway testing was moved on-line.

To address the issues noted above, the gateway tests were converted to an on-line format in 2001, moving through a number of different testing systems before settling on WeBWorK in 2005. Pilot testing took place in fall 2000 and winter 2001, allowing investigation of different testing software and of logistical issues with the on-line test. Full implementation of the on-line test occurred in fall 2001. Moving to an on-line test prompted three changes to the tests. First, when the test required students to enter their responses as a mathematical formula, we increased the time limit by 10 minutes, allowing 30 minutes for what had been a 20 minute pencil-and-paper test. This was intended to reduce students’ anxiety and complaints about getting problems wrong because of errors in mathematical typography that they might not have made when writing the problem by hand. Second, we allowed students to take the test twice per day rather than once, but required that they go to the Math Lab to review a failing test with a tutor before returning to take it again. Finally, we created a version of the gateway that allowed students to easily practice the test whenever they wanted.

Since the implementation of the on-line system, we have added additional gateway tests. Calculus III (math 215) added an entrance gateway on integration skills in fall 2005, and in fall 2015 a gateway test on matrix operations will be added to one of our linear algebra courses, math 214.

Do these tests work? There are a number of possible measures of this. We might first ask if students pass the tests; if not, it is unlikely that they are accomplishing anything productive. If students pass the tests, we might next ask if they appear to be trivial—do all students pass them immediately, or do they require some level of work to pass? And, if students are passing the tests and the tests do appear non-trivial, the larger question of whether students learn the covered skills comes to the fore.

Do students pass the tests? The answer to this is unequivocally “yes”: pass rates on the pre-calculus entrance gateway are about 93%; on the calculus I differentiation gateway, 99%; on the calculus II entrance gateway, 99%; and on the calculus II integration gateway, 92%. The pass rate on the calculus III entrance gateway (which is identical to the math 116 integration gateway) is about 95%. Thus, students do pass the tests.

Are the tests trivial? This does not appear to be the case. The average number of proctored tests taken per student taking the pre-calculus entrance gateway is about 2.7; for the calculus I differentiation gateway the number is 2.0; for the calculus II entrance gateway 1.5 and for the integration gateway, 2.9; and for the calculus III entrance gateway, 2.4. This is in addition to the times that students take the practice version of the tests (in fall 2014, calculus I students took approximately 2.8 practice tests each).

Finally, do students learn the material that is being tested? We have some evidence for this. Testing of the gateways when they were implemented in 1993 indicated a strong correlation between students’ having passed the gateway and having acquired the skills being tested on the gateway test.[6] In 2002 we assessed the on-line tests by giving students short pencil-and-paper pre- and post-tests on the gateway material. The pre-test was administered immediately after the tested skills had been covered in class, and the post-test shortly after the finish of the gateway tests. Comparison of pre- and post-test results concluded that students’ skills improved significantly, and as the single biggest component of the course between the pre- and post-tests was the gateway test, we are fairly confident that this improvement may be attributed to students’ efforts on the gateway test.[5] In addition, calculus I students who had not taken calculus before agreed, on average, with the statement that they learned how to take derivatives as a result of having to complete the gateway.[5] These conclusions are similar to those drawn by another assessment of a test similar to our gateways.[2]

Running our gateway program is a nontrivial task. Our pre-calculus, calculus I and II courses enroll a total of about 3000 students. In the recent past we’ve had class-sections capped at 32 students, resulting in about 20 class-sections of pre-calculus each fall, 55 sections of calculus I, and 25 of calculus II. For each of these sections we create a separate “course” in our WeBWorK system. Calculus III enrolls slightly over 1000 students in the fall, and the linear algebra course in which we have a gateway has about 240 students. Thus, all told, we create well over 100 WeBWorK courses each semester for over 4000 students, and actively manage their rosters throughout the add/drop period. Because all of these courses already use web homework, course and roster management is, in some sense, outside of the gateway program. We will discuss the web homework more in a subsequent blog post.

To allow students to take the test for practice as often as they like, two versions of each gateway are created: one that does not require proctor authorization, and one that does. The former may be taken by students as many times as they like, from wherever they like—except that it is not available in the gateway testing lab. WeBWorK supports access restrictions for tests based on computers’ IP addresses, and we therefore prevent the practice test from being taken in the gateway lab to avoid the possibility that a student could take an unproctored test there and then expect that s/he has passed the test. The proctored test has identical content and is drawn from the same testbanks, but requires a proctor’s authorization to start the test and again to grade it. This allows proctors to verify a student’s identity when s/he starts the test, and to simply not authorize a test for grading if a student is not following testing protocol. Our proctors are undergraduate student workers employed through our Math Lab. The proctored test has access restrictions that prevent it from being taken except in the gateway lab. Computers in the gateway lab are configured to allow internet connections to only the University’s login and name servers and the Department’s instructional technology server.

There is a third, mostly invisible, version of each test as well. This also requires a proctor’s authorization to take, and is called the “instructor proctored gateway test.” The goal of this test is to allow instructors the leeway to administer the test in their office, or to give a student an extra attempt at the test on a day when the instructor is working with her/him on the gateway skills.

Our first gateways start in the first full week of classes and the last finishes about two-thirds of the way through the semester. This results in about 12,000 visits to our gateway lab (in the fall semester), which seats approximately 30 simultaneous test takers. Thus, scheduling the open and due dates for the different gateway tests is a delicate matter: our rule of thumb is that we can’t have more than about 1500 students whose testing windows overlap, and if there are different tests being taken simultaneously, we must ensure that their deadlines are not too close.

Our gateway lab is open the same hours as our Math Lab, *viz.*, 7–10pm Sunday–Thursday, and 11am–4pm Monday–Friday. We aim to have two or three proctors staffing the lab at all times. Occasionally near deadlines (especially the math 116 integration gateway deadline) we open a second computer lab to increase capacity and reduce wait times. Gateway proctors are managed by the Director of the Math Lab, and they are either off-duty or become tutors in the Math Lab when there are no gateways being offered. Proctors undergo a short training session and are provided with procedural information about the tests.

It is worth including in our discussion of the logistics of our gateway tests some note about accessibility. Overall, our experience with our on-line tests is that they are well-equipped to deal with issues of accessibility. In recent fall semesters slightly under 3% of students in pre-calculus, calculus I, II and III have documentation indicating that they need extended time on exams, which we accomplish easily by updating the gateway test time limits for them, and in general the gateway lab is a space in which there are limited distractions. In the past 15 years of on-line gateway testing, we have also provided accommodation for students with needs that range from use of a mouth-controlled pointing device for all computer input to screen magnification of up to 500%.

We do many things in our Introductory Program, and gateway testing is just one piece of a very big system. In some respects, it’s an odd piece of the puzzle—most of what we do revolves around trying to increase student collaboration in and out of the classroom, increasing students’ engagement with the big ideas of the material we cover, and de-emphasizing basic skills and traditional lecture-centered teaching techniques. Gateways, on the other hand, are only concerned with skills. But by having this focus, gateways fundamentally support our ability to teach the courses as we want to: they make these skills expected and intrinsic, and students acquire them largely on their own, with the result that we are able to spend class and instructor time on the concepts and activities that will have an impact on deep student learning.

1. Blake, L. 2016. Personal communication.

2. Fulton, S.R. 2003. Calculus ABCS: A Gateway for Freshman Calculus. *PRIMUS* 13(**4**):361–372.

3. WeBWorK. http://webwork.maa.org. Accessed 11 August 2015.

4. Giordano, F.R. 1992. *Core Mathematics at the USMA*. Department of Mathematical Sciences, USMA. West Point, NY.

5. LaRose, P.G. & R. Megginson. 2003. Implementation and Assessment of On-line Gateway Testing. *PRIMUS* 13(**4**):289–307.

6. Megginson, R. 1994. A Gateway Testing Program at the University of Michigan. In *Preparing for a New Calculus*, A. Solow, ed., pp.85–89. Mathematical Association of America. Washington, D.C.

7. Megginson, R. 2014. Gateway Testing in Mathematics—Past and Present. WeBWorK and Mathematics Support Center Workshop at HKUST 10–11 June 2014. http://www.math.ust.hk/~support/workshop-2.html. Accessed 11 August 2015.

8. Megginson, R. 2015. Personal communication.

9. Speyer, D. 2012. Some thoughts on teaching Michigan calculus. In *the Secret Blogging Seminar* blog, https://sbseminar.wordpress.com/2012/02/08/some-thoughts-on-teaching-michigan-calculus/. Modified 11 April, 2013. Accessed 17 August, 2015.

10. Lotus 1-2-3. https://en.wikipedia.org/wiki/Lotus_123. Modified 19 June, 2015. Accessed 11 August, 2015.

The room I was teaching in last semester is shown in the picture to the right; the room this semester is like that shown in the right-most picture in the header of this site. What’s the difference? What I’m trying to do in the class hasn’t changed; both semesters I talk(ed) a little and students work(ed) on “games” (worksheets) a lot. The worksheets themselves have been ported almost directly between semesters. The classes are almost exactly the same size, and both consist, or consisted, of good students who work hard. So all of that is basically the same.

The differences are really two: this semester I’m teaching three times a week for 80 minutes, while last semester I was teaching four times a week for 50 minutes; and this semester I’m teaching in a room that is set up for students to work together. Both of these are really significant changes, but here I want to focus on the latter. What difference does the room make?

I think there are two differences. One is perceptual: what do we expect to do in the room? And the other is functional: how well do things we do there work?

What does the room tell us about what happens in a class held there? A room with tables at which students sit and look at each other is telling us—my students and me—that their interaction is important. And that’s key to how we want to do things: research says (e.g., Laursen, et al.) that the pieces of instruction that matter for learning are the degree of students’ engagement and their interaction with each other. I’ve been teaching for a while now, and I like to think that I can get students to engage and interact (even) in a sub-optimal room (e.g., one like that I was in last semester), but if we all walk in and know what’s going to be happening in the room… well, I’ll use the word that is all but banned in our proofs: “clearly,” this is better.

Then we’re in class. How well do things work? At some level this is impossible to say—are my students learning better this semester? I certainly can’t say that’s true (both last and this semester my classes were and are about 25 students, the exams are different, the students are different…). But I think that it’s been easier for students to work together, which is the majority of our time in class. The fact that the room is slightly larger and I can easily get to every table is indisputable. I don’t think students always work with more than one neighbor, but I think they do so more when they’re at these tables than they did when they had to figure out how to wrestle their arm desks about. Clearly, these are good things.

On the last day of class for this term I walked into class armed with both a worksheet and, because it was a review day, a short lecture on how the material we were finishing the course with brought together the various central ideas in the course. I started by asking if students would like the lecture summary, or if we should just go into the worksheet. I didn’t take a scientific poll, but all of the people that said something (that I heard) voted for the worksheet. The small egotistical part of me shrieked in protest—this was a **good** summary!—and a larger part of me thought, well, maybe that’s a sign that things are going the way we want. Would their answer have been the same in a different room? This I don’t know, of course. But hopefully those who know about these things will say their response indicates that the classroom was one in which we were making a difference.

I have a feeling that semesters run much the same way. I set up and manage all of the instructional technology applications that we use in our math classes, and these all—web homework, student information forms, databases, etc.—need to be set up and updated (daily) at the beginning of the semester. The result of this is that I spend a lot of time, as I describe it, “putting out fires.” Dealing with what has to be done, then finding that class needs to be taught, and then on return from class disappearing under the pile of new e-mail that came in while I was teaching. It’s a state that makes getting much more done than what must be completed this afternoon (or, if I am lucky, tomorrow) all but impossible, but I have this sense that at some point in the semester the fires should be contained or put out and there will be time to stop and think and maybe get some other things done.

That point of containment always occurs much later than I think it should, of course, and even when (if) it does there is never the expanse of time I expect—the news cycle moves on to other crises, after all. But each semester I still think that in about a month I should suddenly be able to work on Other Projects, only to have the expectation dashed by a frenetic onslaught that just keeps coming. But at the end of the day having all of these things to do, that need to be done, is part of what makes working in this environment so much fun.

Our web homework server is providing on-line homework for on the order of, in the fall semester, 5000 students in over 160 web homework “courses” (classes, or class-sections). (We also support some 45 courses for the University of Michigan Dearborn’s math department). And our courses have multiple schedules for the deadlines for the homework—the calculus I sections that meet MWF have different due dates than those that meet TThF, and so on. Throughout our drop/add period we are seeing hundreds of students adding and dropping different courses and class-sections. Even with some degree of automation, perhaps it is no surprise that the fires start as fast as I can put them out. On the other hand, it’s amazing that it all seems to work as smoothly as it does.

And I think that this is part of what is so much fun about this part of our program. We actually are able to administer 160 homework assignments (or, if you count the different schedules, about 660 assignments) and give over 11,000 proctored skills tests to those 5000 students in the course of the semester! Equally amazingly, this piece is only one of the many moving parts present in these courses. And all of those parts are held together at the center of our instructional program by the students and instructors in those courses, with whom we all get to work as we put out the fires and keep it all running. It’s a lot of fun to get to manage all of the technology, but even more fun to work with all the people involved in using it in pursuit of learning.

And student learning is, of course, the important part. On a personal level, this comes to the surface when I leave the on-line systems behind and walk into the classroom. Peculiarly, it’s also a bigger job, in some respects, to teach those 26 students than it is to manage the on-line homework for the 162 courses and all of their instructors. I have always thought that teaching one class could be a full-time job if one let it be. There are worksheets to write or modify, material to figure out so that what’s difficult is explained, logical progressions through the material to bring into focus. And then we walk into the space where the students are working on all of these things. I feel as if that’s where we all get away from the fire-line. I leave the e-mail disasters behind, students leave their deadlines to queue up at the door with my e-mail and smoldering fires, and we get to play with orthogonal projections. Or inner products, or whatever the topic of the day is. We think about the pictures behind them, stop to marvel at their application, start a problem only to discover that it actually isn’t straightforward. And we work it out. At least until the class-hour ends, at which point we go back out to see what the queues of homework deadlines or crises at the door look like. We check the containment status of our many fires, and go back to the line.

Until next class.

In August, *The Chronicle* published an article titled “Confuse Students to Help Them Learn.” Among other things, it describes an experiment in which physics students watched one of two different educational videos: one video explained a basic physics concept straightforwardly in a clear and concise manner, while the other featured a confused student trying to wrap his head around the concept and, despite receiving only guided questions from a tutor, eventually getting it right. The student viewers found the first video easy to understand, clear, and concise, while they found the second one confusing. But, the article notes, “the students who had watched the more confusing videos learned more. The students who had watched the more-straightforward videos learned less, yet walked away with more confidence in their comprehension.”

Psychologists are also conducting research on confusion and learning. Professors Sidney D’Mello and Arthur Graesser have worked on a number of studies investigating the relationship between confusion and learning. They, along with their collaborators, published an article this year entitled “Confusion Can Be Beneficial for Learning” (preprint here), in which they observe that “confusion is expected to be more the norm than the exception during complex learning tasks. Moreover, on these tasks, confusion is likely to promote learning at deeper levels of comprehension under appropriate conditions.” The article goes on to explore what these conditions are in a controlled experiment.

The role of confusion in the classroom, along with the phrase, “under appropriate conditions,” is something I have been thinking about lately (albeit in a less scientific manner). *The Chronicle* article states a similar sentiment with the section heading “Confusion works, except when it doesn’t.” But before the caveats that not all confusion is helpful, the big idea is that *confusion can be helpful*. Many of my students are conditioned to think that all confusion is bad, so the first thing I need to do in order to use it effectively in the classroom is help students see that confusion can be productive.

I find it helpful to make the following distinction. Confusion comes in two flavors: productive and unproductive. Confusion is productive if you have the skills and resources to work on and sort out your confusion; it is unproductive if you do not have these resources. And because constructing your own knowledge is much more effective than simply receiving knowledge from someone else, productive confusion is a valuable commodity in the learning process.

But just knowing that confusion can be productive is not always helpful in practice. The trouble is, you cannot immediately tell which kind a particular confusion is—quite often, even productive confusion *feels* unproductive until you resolve it. Always try to see if you have the tools to address your own confusion! (And if not, try to figure out what tools you might need but don’t have—a major part of an education consists of converting previously unproductive confusion into productive confusion.)

It is much easier to talk about the relationship between confusion and learning than it is to harness its power, though, so here are some thoughts on how productive confusion has shown up recently in my classroom:

I am currently teaching an Inquiry Based Learning class for pre-service elementary school teachers. In fact, many of the course materials are developed to situate students in the “productively confused” category. For instance, we recently spent a week studying place value and how strongly it influences the way we think about numbers. We do this by working in *base-five*. What’s more, I change up the digits, so we write numbers with the symbols A, B, C, D, and 0. (This is post number AA on this blog…) *This is all done in the service of creating confusion.* In order to make this confusion productive, I consider how I set the context for our study, what tools and information I give the students to start with, and what questions I ask them to investigate (and in what order).

First, I think context is very important in order for students to recognize confusion as productive. In the class described above, I tell students that we are interested in studying the structures and patterns of the way we write numbers, and the change to base-five is in the service of distinguishing structures and patterns from things that just feel obvious to them in base-ten. Without this explicit objective, some students may lack motivation to apply themselves to the problems, and others may side-step a successful resolution of their confusion by employing simple procedural methods. In effect, I have described what is going to make this exercise “productive.” For the remainder of the class, we relate all of our progress in understanding base-five to the primary goal of understanding the structure of a base system.

Second, I need to tell students enough that they can start working on problems, but not so much that the problems become only applications of what I tell them. This can be a delicate balance (and is also influenced by what problems I pick to put on the worksheet). For our work in base-five, I tell the class what symbols we will use (A, B, C, D, and 0), and how to count in base-five (by listing the numerals A, B, C, D, A0, AA, AB, AC, AD, B0,…). At this point, I assume that students can continue counting (perhaps with the help of their group members), and thus can make some kind of progress on the worksheet. I leave them to discover patterns and structures in their groups.

Finally, I select problems and put them in a particular order. I want students to have success (confidence is very helpful for making confusion productive), but I also want them to learn something substantial. The problems are arranged to be increasingly difficult to solve just by counting (“figure out in base-five how many gummy bears you have in a package,” followed by “then figure out how many gummy bears you and the group next to you have all together”). Eventually, groups will need to utilize the structure of how a number is written in base-five in order to progress (“accurately place 0, AD, CDD, B000 and A0000 on a base-five number line”).
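The counting scheme and the worksheet answers above can be checked mechanically. Here is a minimal Python sketch: the digit values (0 for zero, A through D for one through four) come from the class described in this post, while the function names and structure are my own.

```python
# Digit symbols from the class: 0 stands for zero, A-D stand for 1-4.
DIGITS = "0ABCD"
VALUES = {d: v for v, d in enumerate(DIGITS)}

def to_base5(n):
    """Write a nonnegative base-ten integer in the A/B/C/D/0 base-five notation."""
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, r = divmod(n, 5)  # peel off the least significant base-five digit
        out.append(DIGITS[r])
    return "".join(reversed(out))

def from_base5(s):
    """Read an A/B/C/D/0 numeral back into a base-ten integer."""
    n = 0
    for d in s:
        n = 5 * n + VALUES[d]
    return n

# Counting as in the post: A, B, C, D, A0, AA, AB, AC, AD, B0, ...
print(", ".join(to_base5(k) for k in range(1, 11)))
# → A, B, C, D, A0, AA, AB, AC, AD, B0

# The number-line problem: AD, CDD, B000, A0000 in base ten.
print([from_base5(s) for s in ["AD", "CDD", "B000", "A0000"]])
# → [9, 99, 250, 625]
```

As a side check, this confirms that post “AA” is post number six, and that placing CDD between B000 and AD on the number line amounts to comparing 99, 250, and 9 in base ten.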

By the time students have (at least partially) resolved their confusion with writing numbers in base-five, they have discovered the use of grouping and ungrouping (by fives) as well as what the phrase “place value” means in a base system. These discoveries become things that they own, and that they will reference in future investigations. My students have encountered productive confusion, and converted it into knowledge and understanding.

However important I find this work to be, a method for quantitatively assessing participation escapes me. How can I possibly assess a student’s participation in both small group work and whole-class discussion in a meaningful way? And how can I do that in a way that doesn’t inherently favor certain personality types (such as those who raise their hand to answer *every* question)? In the end, I usually give all my students full points, except for those who were often tardy or absent. But it is not “butt-in-the-seat” time that I am trying to measure, so this is unsatisfying.

To address this, I am trying something new this year. I am drawing students’ attention to the work they do in class and calling such incidents (asking a question, making a suggestion, etc.) a “class contribution.” Then I ask students to submit, on a weekly basis, a response to the following:

Describe your class contribution and explain what made it valuable to the mathematical discussion. What did you learn from the experience? What did your group or classmates learn? Remember that a contribution does not have to be a correct answer or slick solution. Good questions can be incredible contributions, and ideas that you share that may not pan out are often just as valuable as complete answers, if not more so.

Do I really think I will be able to differentiate student participation in a fine-grained, quantitative, and meaningful way with these responses? Absolutely not. It is still the case that almost all of the students will get full points for the participation part of their grade. However, my goal in including a “participation grade” in the first place was to highlight the importance of in-class work. I am doing that by simply asking the students to *tell me* the importance of in-class work.

There is little I can do to measure the effect of this tiny weekly assignment. Will it create a more involved class? Improve student learning? There is no way for me to attribute observed changes or improvements to this assignment. But the assignment allows me to see students’ awareness of the effect of their participation on classroom learning and their own learning. As an example, here is one contribution I’ve received so far, in which a student describes a question she asked during another student’s presentation of constructing a perpendicular intersection on the first day of class:

I asked about the length of the radius the presenter used to draw the circles. I learned that the length had to be greater than that of half the line segment, and I think other people in the class learned this, too. I think the presenter learned that it helps to be more specific.

I can’t say that this student made these observations *because of* the assignment. But it’s awfully reassuring to see this concrete example of a student recognizing the value of her class contribution (while simultaneously reflecting on class content and mathematical communication). And wasn’t this my goal in the first place?


I think that’s good. I find myself thinking of the MAA’s Project NExT, a professional development program which I had the good fortune to help run for many years. Project NExT succeeds by getting amazingly good presenters together with about 80 new Fellows for an incredibly intense 2.5-day workshop. It, too, is a bit of a storm. But it works, and works amazingly well, in large part because it not only provides resources but creates community.

We had 45 new graduate students and faculty in our new instructor training program this year, and ran about 20 sessions with a group of about 10 faculty, graduate students, and staff. The goal is to get our new instructors to understand how we want to teach here, and why, and to give them all the tools and background that they need to do that well. I then want to ask “does this work?” and to answer “yes!”, but I’m not sure it’s that black-and-white.

To be sure, many of the people who go through our training turn into unbelievably good teachers. Many of the people who move through the Department are or become unbelievably good researchers, too. Some are very good ultimate players. Some are exceptional bridge players (I think; I don’t actually play bridge, but comparatively many here do, and they can judge one another’s skill). In all cases I think a lot of what drives that success is internal to the individual. I said in a talk I gave to Project NExT Fellows in 2013 that “pedagogy is personal,” and I think that’s true. That is, there are many ways to be a successful teacher, the way one person accomplishes that is likely to be different from the way someone else does, and how good one is depends a lot on a willingness to be aware and to work very hard to improve. And that is something that is very hard to see how to build into an instructor training program.

But we can probably build in some structures that facilitate getting there. I hope that the people with whom we work remain or become convinced that it *matters*—to all of us here—if we teach well. And I hope that they will see enough evidence from the pedagogical literature that student engagement is essential for learning to treat engagement as something with which they should be concerned. Certainly there are those around us for whom learning happens no matter what the instructor does. There is a case to be made that I was a student who could do that, and this is likely the case for any of us who are pursuing or have earned a Ph.D. But that may simply prove the point that engagement is essential: I think I probably managed to be engaged in most classes, and certainly engaged with the material outside of class. So teaching well has to be concerned with student engagement.

Of course, that’s why we have a lot of discussion about inquiry based instruction and learning in our program. But we also have to believe that it matters, and to believe that we all should work to do it well, and to get to that point I think the storm that blows through training is itself important. We don’t exist in a vacuum, and we don’t teach in a vacuum. (As Groucho Marx didn’t say, inside a vacuum it’s too dark to teach, and it’s probably too loud, too.) I think we have a community of teachers and learners in our Department here, and I think that having an intense space where we are all being deluged by activity and ideas is something that can help build that. This is something that works in Project NExT, and I hope it works here.

Usually, the process of evaluating where to place a transfer student begins with a review of the student’s paperwork – course descriptions, books, syllabi, old exams – and is followed by some pretty straightforward questions about the material. Oftentimes students will present paperwork that makes it completely clear what first or second year math course they should take next. About one in twenty students arrive having taken enormous amounts of what I consider to be graduate level math (e.g., measure theory or functional analysis), and their placement is more challenging (and fun), but still clear.

However, it is also very common for a student to arrive with a clutch of course descriptions/syllabi that either (1) are so vague as to be useless (e.g., “This course enables students to understand calculus deeply by studying differentiation and integration.”) or (2) read as if someone had designed the course by first cutting out the chapter titles from books about linear algebra, differential equations, calculus, and multivariable calculus and then randomly rearranging them into a list – such a course description might start: implicit differentiation, Hilbert spaces, eigenvalues, mean value theorem, radius of convergence, …

For these students the placement process is more difficult. I usually begin by asking computational questions: What is the derivative of ? What is the indefinite integral of ? What is the radius of convergence of ? What are the limits of integration when integrating over the bounded region that has boundaries ? What is the curl of ? A longish math discussion based on the student’s responses to these questions follows, and we eventually home in on a course for the fall that will be appropriate for the student.

This quizzing on computational matters always bothers me – I don’t believe the value of taking a calculus class lies in learning that the derivative of is , that the indefinite integral of is , or that the power series has radius of convergence . Yes, you should learn how to compute all of these things (and much more) in a year of calculus. However, since WolframAlpha can already do these computations for us, hopefully the students are learning something more than how to compute.

This year I experimented a bit by expanding my bank of quiz problems to include non-computational problems. Since calculus at Michigan focuses on having students understand the underlying concepts, I decided to use old Michigan Calculus I exams. The results were mixed, and I think I won’t do this again. However, there was one obvious conclusion: students who took math courses for which the course description was a laundry list of disconnected mathematical topics had basically no conceptual understanding of the mathematics they had studied.

This got me to thinking about the mathematical methods courses I was required to take as an undergraduate physics major. Since I had already seen a coherent presentation of most of their content, I found these topic-a-day math methods courses to be shallow, unsatisfying, and easy (the latter was a good thing, since I was trying very hard at the time to woo my eventual spouse). However, I think my non-math-major classmates probably struggled mightily, learned as much as the tests required, and long ago blocked the courses from their memory. I understand the urge to design a course that is encyclopedic and covers much of the math a scientist working in the field right now might need, but I’m not so sure about the effectiveness of learning mathematics “Matrix” style. I think yesterday’s Dilbert cartoon pretty well sums up my thoughts on this point.

So, what should the aim of an undergraduate math class be? Here is a partial answer. For sure, a good math course should teach students how to do things that WolframAlpha can also do, but it must do more. A good undergraduate course should also engage the students with the material, develop their problem-solving skills, and have them grapple with the concepts that underpin the material. And all math classes should help students develop their abilities to (1) think logically and abstractly and (2) express themselves rigorously and concisely.

Maybe, if we do it right, such students will even be equipped to learn the mathematics that explains the science not only of today, but the science they will encounter decades hence. Maybe.
