The Effect(iveness) of Change

Over the course of last summer we revised our standard “sophomore-level” differential equations course, taken by most engineering students. This is the course that has traditionally been a recipe course, in which one learns to categorize all of the different types of differential equations that can be solved by hand while glossing over the fact that most differential equations can’t be. The revision made it a more conceptual and more relevant course; in the following we look at some of the history of the course, the goals of the revision, and the outcome of our efforts.

History

Our version of this course appears to have been rooted in the traditional “recipe-driven” course, with evolutionary changes moving in the direction of a more conceptual, more demanding course. Resource constraints require that it be taught in lectures of roughly 100 students. The lecture meets three times a week for a standard 50-minute period. For many years—since at least the mid-90s, and possibly earlier—there has been an extra hour a week associated with the course, which has in some weeks been a computer lab and in others a standard recitation. The students in this course are 75–85% from the College of Engineering; the Department offers other courses in differential equations for other clientele (e.g., a course that assumes that students have had a course in linear algebra and an introduction to proof writing).

[image of classroom style computer lab]
Computer Lab, pre-revision

The original computer labs for the course were developed in the early to mid-90s and centered around Euler’s method and some applications; they underwent a major redesign in the late 1990s.[1] Those were the basis for the labs that were in use for the ensuing 10–15 years—let it not be said that our work does not have an ongoing impact! Between then and 2016, the labs evolved slowly, largely in the direction of requiring less student knowledge of Matlab (the package used for the labs, as all engineering students at the University learn and use it) and, arguably, less conceptual engagement. Students spent five of the weekly sessions working on the five labs; in the remaining weeks the hour met as a recitation, in which the expected recitation-type activities (answering questions, clarifying material) took place.

Common wisdom is that the course as it was before revision was perceived by students as easy and, as it was straightforward, a “good course.” An unscientific survey of student comments on ratemyprofessor.com[2] bears this out, finding it characterized as “the easiest of the required engineering math classes.”[3] Feedback on the labs in teaching evaluations suggested that students saw them as little more than hoops to jump through that didn’t connect to the course as a whole. Recitations, not surprisingly, were regarded as much more helpful. One might suppose that it’s difficult not to like a space where one is told the answers one seeks.

Goals and Revision

There is a growing and persuasive body of evidence that if we are going to make a (mathematics) course effective, in the sense of engendering student learning, we must get students actively engaged with the material.[4,5,6] It is clear from the preceding discussion that the course was not necessarily doing a good job of this before its revision. Active engagement is difficult when the primary instruction takes place in a large lecture, and even the labs and recitations were not well structured to ensure active learning.


Some linear algebra application ideas

Last Fall, I took over our “Applied Linear Algebra” course. This course is targeted at an audience majoring primarily in either electrical engineering and computer science or else in industrial and operations engineering. I’m not an applied mathematician myself, but I really wanted to get nontrivial applications of Linear Algebra into the curriculum.

We did this in two ways — first, with large multi-week group projects, and second with classroom demos using real data and MATLAB or Mathematica scripts. For the first projects, we chose the applications and walked the students through the computations. For the second, we challenged the students to go out into the world and find us a real application, which they presented in a packed poster fair on the last day of class.

Afterwards, some other instructors told me they were surprised at how many examples I had found. So I wrote up a quick note with my favorite examples. Here is a cleaned-up version of that note. It mixes together in-class demos and group projects, as I think most of these could work at either length.

One thing I’d like to do better in future terms is see if I can find a way to have the students do in-class experimentation with some of these ideas. When I brought this material into class, I was using my laptop and premade scripts on the projector, taking questions and ideas for what sort of computations to run, but still fundamentally controlling the presentation. The big challenge is that, although most of the students have used MATLAB, and many are CS majors, most of them are not actually fluent coders who can think of an idea and immediately make a MATLAB script to do it. (The small challenge is getting enough laptops and desk space, but I believe I could solve this one.) Still, it would be great if we could.

One deficiency of this list is that I don’t have a lot of good applications for the first third of the term — solving linear equations, reduced row echelon form, subspaces, bases, dimension. I hope I’ll solve this in future terms.


25 Years: Gateway Testing at Michigan

[image of small group working in class]
Michigan Calculus Classroom, early 90s

Editors’ note: This is the first of a series of blog posts on the state and history of the University of Michigan’s undergraduate mathematics program. Calculus reform came to Michigan in the early 1990s—closing in on 25 years ago. So this is the first installment of a “25 year retrospective.”

“In the new approach, as you know, the important thing is to understand what you’re doing rather than to get the right answer.”
–Tom Lehrer

Gateway Testing

What’s a gateway test? Our definition is that it is a test of basic skills. These may be skills that are prerequisite for success in a course, or may be skills which every student in a course should develop. For a pre-calculus course, we find that basic algebra skills and function manipulation are prerequisite skills; the canonical example of a skill intrinsic to a course is rule-based differentiation in calculus I. Of course, most calculators, Wolfram|Alpha, and many other tools can easily do the calculations that these skills allow students to perform, and we use calculators and other technology in our classes. So why should we test the skills? We do it because we find that students without prerequisite skills struggle in a course (even in the presence of technology), and we continue to believe that there really are skills (like differentiation in calculus I) that any student who has successfully completed the course should be able to do.

However, while these skills are intrinsic to or essential for success in the courses, they are not our educational focus. Our courses focus on conceptual understanding; e.g., for calculus I, what a derivative tells us, and how we can use this to solve problems—not how to find the derivative of \sin(\cos(x^2) + 2). This conceptual understanding is fundamental to our courses, and it is therefore the subject of the bulk of class and instructor time, and it is what we evaluate on exams. Gateway exams are the mechanism by which we ensure that students also have or acquire the basic skills we expect of them, and we find that they work with minimal investment of in-class time.[9]

[testing lab, showing desks and layout]
Gateway Testing Lab

Our gateway tests are 7- or 10-question tests, administered on-line (we use WeBWorK[3] as our homework and testing platform), which students may take multiple times and which they must complete with almost no errors to pass (we allow at most one error). Students may practice the tests as many times as they like, but to get credit for having passed the test they must take it in a proctored lab where their identity is verified and where they are not allowed to use outside resources. The tests have a time limit of twenty or thirty minutes, depending on the test. Because the skills being evaluated on the gateway tests are those which we expect every student in the course to have or acquire, completion of the test is not a part of the students’ course averages; instead, any student who doesn’t pass a gateway by the specified deadline (students may take the tests in a 2–3 week window during the semester) has her/his final grade reduced by 1/3 to a full letter grade at the end of the semester.
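The pass rule and grade penalty described above can be sketched as a couple of small functions. This is a hypothetical illustration only, not WeBWorK's actual logic; the default penalty value of 1/3 GPA point is an assumption (the actual penalty ranges from 1/3 to a full letter grade).

```python
def gateway_passed(num_questions, num_correct):
    """A gateway test is passed with at most one error (sketch of the rule)."""
    return (num_questions - num_correct) <= 1

def final_course_gpa(course_gpa, passed_by_deadline, penalty=1/3):
    """Apply the gateway penalty to a GPA-scale course grade.

    `penalty` is in GPA points (1/3 = one letter-grade "notch", e.g. B+ -> B);
    the exact value used in a given course is an assumption here.
    """
    if passed_by_deadline:
        return course_gpa
    return max(0.0, course_gpa - penalty)

print(gateway_passed(7, 6))    # one error on a 7-question test: passes
print(gateway_passed(10, 8))   # two errors: does not pass
```

Note that the penalty is applied after the course average is computed, which is what makes it impossible to "make up" a missed gateway with later exam performance.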

This small carrot/large stick model plays well with student psychology. In general, students in a course will often assume that they can make up missed material later in the course by doing better on the final (or, not uncommonly, pleading with the instructor after the fact to be allowed to do extra credit). Because it is explicitly not possible to “make up” a gateway, or to get back the grade penalty by doing better on the final, there is a strong incentive for students to complete the gateways successfully.

We currently have five gateway tests and are adding a sixth in fall 2015. The “original” four are (1) an entrance gateway (that is, a test of prerequisite skills) for math 105 (Data, Functions and Graphs; a pre-calculus course), (2) a differentiation gateway in math 115 (calculus I), (3) an entrance gateway on differentiation and (4) an integration gateway in math 116 (calculus II). An entrance gateway on integration was added to math 215 (calculus III) in fall 2005. In fall 2015, math 214 (a linear algebra course taken by some engineers) will add a gateway test on matrix operations and row reduction.

Historical Background

Gateway tests as we use them at the University of Michigan appeared in the world of undergraduate mathematics education sometime before the 1980s. They were sometimes referred to as “barrier tests,” though this characterization appears to paint a half-empty rather than half-full glass. We prefer the term “gateway,” suggesting that they lead to something great. Duke introduced them in the 1980s as part of their calculus reform efforts [1], and the U.S. Military Academy was using them by the early 1990s [4]. The University of Michigan was pilot testing them for pre-calculus by 1992.[6] The tests in use in 1992 were slightly different from the test we now give, with five different tests of five problems each, which students were required to complete perfectly. These were pencil-and-paper tests initially generated by hand, and subsequently created with an “enormous” Lotus 1-2-3 [10] spreadsheet.[7,8] The first administration (or, possibly, two) of each test was done in class.[8] It is worth noting that for the first couple of years failure to pass the gateway test resulted in a failing grade for the course, though instructors were told to ensure that this didn’t happen unless the student would fail the course for other reasons anyway. Also, when it was first implemented, the deadline for completing the test was the end of the semester, which proved less desirable than fixing an earlier deadline.[8]

[students working in the math tutoring center]
Math Tutoring Lab

The pre-calculus gateway was revised, and a derivative gateway added to calculus I, in 1993 as the Department reformed its calculus sequence to create what was then called “New Wave” calculus. This was subsequently renamed “Michigan Calculus,” a moniker that we continue to use today. Gateways for calculus II were added by 1997. The calculus tests had seven problems and students were allowed at most one error, while the pre-calculus gateway had 12 questions and allowed at most two errors. We believe that the tests were generated using a modification of the original Lotus 1-2-3 spreadsheet, which created TeX files from a fairly expansive testbank which appeared larger than it actually was because the problems were numbered non-sequentially (e.g., 11, 12, 13, 14, 21, 22, 35, 36, etc.; problem numbers were printed on the tests that students took). When gateways were being taken, versions of the tests would be created and made available in the Math Learning Center (a.k.a. Math Lab: our free walk-in math tutoring center). At least some instructors would administer the test once in class, and students could take the test again, once per day, in the Math Lab, until the test came due.

The logistics of managing these tests were formidable. In fall semesters, our pre-calculus course had approximately 600 students enrolled; calculus I, approximately 1600; and calculus II, 800. Students taking gateway tests would come to the Math Lab and get a test from the student manning the desk at the front of the lab. The date and time would be stamped on the test when they received it, and they would complete the test at one of the tables in the Math Lab. When they had completed the test they would hand it in at the front desk, where it would be stamped with the date and time again and put in a box for grading. The student manager in the Math Lab, or the (faculty) Math Lab Director or the Introductory Program Director would grade the exams when they were not otherwise busy, and, in any case, not while a student waited. Grading was done with as close to zero tolerance for errors as possible, with a goal of having as fast a turn-around as possible.

This system worked, but in a somewhat qualified manner. Time constraints on grading had the unfortunate consequence that students would not get their graded exams back immediately, which could result in them taking the test again before finding that a previous test had a passing grade. One could imagine a situation in which a student in a class that met Monday, Tuesday and Thursday could take the test after class on Monday, not get it back on Tuesday—and therefore have taken a test on Tuesday, Wednesday, and Thursday (before class) before discovering in class on Thursday that they had in fact passed the test on Monday! And the Math Lab was frequently overwhelmed by the number of students coming in to take a gateway test and for tutoring, and was therefore neither a good testing environment nor easy to monitor. To (try to) address the grading issues, the tests were converted to multiple-choice, but this was done with a heavy heart and the (fulfilled) expectation that it would reduce the efficacy of the tests. The math 115 and 116 gateways were converted back to short-answer when gateway testing was moved on-line.

[screenshot of a calculus gateway]
WeBWorK Gateway Test

To address the issues noted above, the gateway tests were converted to an on-line format in 2001, moving through a number of different testing systems before settling on WeBWorK in 2005. Pilot testing took place in fall 2000 and winter 2001, allowing investigation of different testing software and of logistical issues with the on-line test. Full implementation of the on-line test occurred in fall 2001. Moving to an on-line test prompted three changes to the tests. First, when the test required students to enter their responses as a mathematical formula, we increased the time limit by 10 minutes, allowing 30 minutes for what had been a 20 minute pencil-and-paper test. This was intended to reduce students’ anxiety and complaints about getting problems wrong because of errors in mathematical typography that they might not have made when writing the problem by hand. Second, we allowed students to take the test twice per day rather than once, but required that they go to the Math Lab to review a failing test with a tutor before returning to take it again. Finally, we created a version of the gateway that allowed students to easily practice the test whenever they wanted.

Since the implementation of the on-line system, we have added additional gateway tests. Calculus III (math 215) added an entrance gateway on integration skills in fall 2005, and in fall 2015 a gateway test on matrix operations will be added to one of our linear algebra courses, math 214.

Assessment

Do these tests work? There are a number of possible measures of this. We might first ask if students pass the tests; if not, it is unlikely that they are accomplishing anything productive. If students pass the tests, we might next ask if they appear to be trivial—do all students pass them immediately, or do they require some level of work to pass? And, if students are passing the tests and the tests do appear non-trivial, the larger question of whether students learn the covered skills comes to the fore.

Do students pass the tests? The answer to this is unequivocally “yes”: pass rates on the pre-calculus entrance gateway are about 93%; on the calculus I differentiation gateway, 99%; on the calculus II entrance gateway, 99%; and on the calculus II integration gateway, 92%. The pass rate on the calculus III entrance gateway (which is identical to the math 116 integration gateway) is about 95%. Thus, students do pass the tests.

Are the tests trivial? This does not appear to be the case. The average number of proctored tests taken per student taking the pre-calculus entrance gateway is about 2.7; for the calculus I differentiation gateway the number is 2.0; for the calculus II entrance gateway 1.5 and for the integration gateway, 2.9; and for the calculus III entrance gateway, 2.4. This is in addition to the times that students take the practice version of the tests (in fall 2014, calculus I students took approximately 2.8 practice tests each).

Finally, do students learn the material that is being tested? We have some evidence for this. Testing of the gateways when they were implemented in 1993 indicated a strong correlation between students’ having passed the gateway and having acquired the skills being tested on the gateway test.[6] In 2002 we assessed the on-line tests by giving students short pencil-and-paper pre- and post-tests on the gateway material. The pre-test was administered immediately after the tested skills had been covered in class, and the post-test shortly after the finish of the gateway tests. Comparison of pre- and post-test results showed that students’ skills improved significantly, and as the single biggest component of the course between the pre- and post-tests was the gateway test, we are fairly confident that this improvement may be attributed to students’ efforts on the gateway test.[5] In addition, calculus I students who had not taken calculus before agreed, on average, with the statement that they learned how to take derivatives as a result of having to complete the gateway.[5] These conclusions are similar to those drawn by another assessment of a test similar to our gateways.[2]

Logistics

Running our gateway program is a nontrivial task. Our pre-calculus, calculus I and II courses enroll a total of about 3000 students. In the recent past we’ve had class-sections capped at 32 students, resulting in about 20 class-sections of pre-calculus each fall, 55 sections of calculus I, and 25 of calculus II. For each of these sections we create a separate “course” in our WeBWorK system. Calculus III enrolls slightly over 1000 students in the fall, and the linear algebra course in which we have a gateway has about 240 students. Thus, all told, we create well over 100 WeBWorK courses each semester for over 4000 students, and actively manage their rosters throughout the add/drop period. Because all of these courses already use web homework, course and roster management is, in some sense, outside of the gateway program. We will discuss the web homework more in a subsequent blog post.

To allow students to take the test for practice as often as they like, two versions of each gateway are created: one that does not require proctor authorization, and one that does. The former may be taken by students as many times as they like, from wherever they like—except that it is not available in the gateway testing lab. WeBWorK supports access restrictions for tests based on computers’ IP addresses, and we therefore prevent the practice test from being taken in the gateway lab to avoid the possibility that a student could take an unproctored test there and then expect that s/he has passed the test. The proctored test has identical content and is drawn from the same testbanks, but requires a proctor’s authorization to start the test and again to grade it. This allows proctors to verify a student’s identity when s/he starts the test, and to simply not authorize a test for grading if a student is not following testing protocol. Our proctors are undergraduate student workers employed through our Math Lab. The proctored test has access restrictions that prevent it from being taken except in the gateway lab. Computers in the gateway lab are configured to allow internet connections to only the University’s login and name servers and the Department’s instructional technology server.
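The gating between the practice and proctored versions can be modeled as follows. This is an illustrative sketch only: the lab address block is made up, and WeBWorK's actual IP restrictions are per-test configuration settings, not application code like this.

```python
import ipaddress

# Hypothetical address block for the gateway testing lab (an assumption;
# the real lab's subnet is whatever the campus network assigns it).
GATEWAY_LAB_NET = ipaddress.ip_network("192.0.2.0/28")

def in_gateway_lab(client_ip):
    """True if the request comes from a machine in the gateway lab."""
    return ipaddress.ip_address(client_ip) in GATEWAY_LAB_NET

def can_take(version, client_ip, proctor_authorized=False):
    """Which test version may be started from a given machine.

    'practice'  - anywhere EXCEPT the gateway lab, no proctor needed
    'proctored' - ONLY in the gateway lab, and only with proctor sign-off
    """
    in_lab = in_gateway_lab(client_ip)
    if version == "practice":
        return not in_lab
    if version == "proctored":
        return in_lab and proctor_authorized
    raise ValueError(f"unknown test version: {version}")

print(can_take("practice", "198.51.100.7"))                          # dorm room: OK
print(can_take("practice", "192.0.2.5"))                             # in lab: blocked
print(can_take("proctored", "192.0.2.5", proctor_authorized=True))   # in lab + proctor: OK
```

The key design point is the exclusion in both directions: the proctored test cannot leave the lab, and the practice test cannot enter it, so no student can mistake an unproctored attempt for a passing score.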

There is a third, mostly invisible, version of each test as well. This also requires a proctor’s authorization to take, and is called the “instructor proctored gateway test.” The goal of this test is to allow instructors the leeway to administer the test in their office, or to give a student an extra attempt at the test on a day when the instructor is working with her/him on the gateway skills.

Our first gateways start in the first full week of classes and the last finishes about two-thirds of the way through the semester. This results in about 12,000 visits to our gateway lab (in the fall semester), which seats approximately 30 simultaneous test takers. Thus, scheduling the open and due dates for the different gateway tests is a delicate matter: our rule of thumb is that we can’t have more than about 1500 students taking tests at the same time, and if there are different tests being taken simultaneously, we must ensure that their deadlines are not too close.

Our gateway lab is open the same hours as our Math Lab, viz., 7–10pm Sunday–Thursday, and 11am–4pm Monday–Friday. We aim to have two or three proctors staffing the lab at all times. Occasionally near deadlines (especially the math 116 integration gateway deadline) we open a second computer lab to increase capacity and reduce wait times. Gateway proctors are managed by the Director of the Math Lab, and they are either off-duty or become tutors in the Math Lab when there are no gateways being offered. Proctors undergo a short training session and are provided with procedural information about the tests.
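A back-of-the-envelope check makes the scheduling pressure concrete. The figures below come from this post (30 seats, the lab hours just listed, roughly 12,000 fall-semester visits); the 30-minute seat occupancy per visit and the roughly 10-week testing window are assumptions.

```python
# Weekly lab hours: 7-10pm Sun-Thu (3h x 5 days) plus 11am-4pm Mon-Fri (5h x 5 days).
evening_hours = 3 * 5
daytime_hours = 5 * 5
weekly_hours = evening_hours + daytime_hours          # 40 hours/week

seats = 30
slot_hours = 0.5   # assume each visit occupies a seat for ~30 minutes
tests_per_week_capacity = seats * weekly_hours / slot_hours   # 2400 tests/week

visits_per_term = 12_000
testing_weeks = 10  # first full week to ~2/3 of the semester (assumed)
avg_visits_per_week = visits_per_term / testing_weeks         # 1200 visits/week

# On average the lab runs at about half of its theoretical capacity, so
# overlapping deadlines can still swamp it -- hence the scheduling care
# and the occasional second lab near big deadlines.
utilization = avg_visits_per_week / tests_per_week_capacity
print(f"capacity/week: {tests_per_week_capacity:.0f}, "
      f"average visits/week: {avg_visits_per_week:.0f}, "
      f"utilization: {utilization:.0%}")
```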

Accessibility

It is worth including in our discussion of the logistics of our gateway tests some note about accessibility. Overall, our experience with our on-line tests is that they are well-equipped to deal with issues of accessibility. In recent fall semesters slightly under 3% of students in pre-calculus, calculus I, II and III have documentation indicating that they need extended time on exams, which we accomplish easily by updating the gateway test time limits for them, and in general the gateway lab is a space in which there are limited distractions. In the past 15 years of on-line gateway testing, we have also provided accommodation for students with needs that range from use of a mouth-controlled pointing device for all computer input to screen magnification of up to 500%.

Conclusions

We do many things in our Introductory Program, and gateway testing is just one piece of a very big system. In some respects, it’s an odd piece of the puzzle—most of what we do revolves around trying to increase student collaboration in and out of the classroom, increasing students’ engagement with the big ideas of the material we cover, and de-emphasizing basic skills and traditional lecture-centered teaching techniques. Gateways, on the other hand, are only concerned with skills. But by having this focus, gateways fundamentally support our ability to teach the courses as we want to: they make these skills expected and intrinsic, and students acquire them largely on their own, with the result that we are able to spend class and instructor time on the concepts and activities that will have an impact on deep student learning.

References

1. Blake, L. 2016. Personal communication.

2. Fulton, S.R. 2003. Calculus ABCS: A Gateway for Freshman Calculus. PRIMUS 13(4):361–372.

3. WeBWorK. http://webwork.maa.org. Accessed 11 August 2015.

4. Giordano, F.R. 1992. Core Mathematics at the USMA. Department of Mathematical Sciences, USMA. West Point, NY.

5. LaRose, P.G. & R. Megginson. 2003. Implementation and Assessment of On-line Gateway Testing. PRIMUS 13(4):289–307.

6. Megginson, R. 1994. A Gateway Testing Program at the University of Michigan. In Preparing for a New Calculus, A. Solow, ed., pp.85–89. Mathematical Association of America. Washington, D.C.

7. Megginson, R. 2014. Gateway Testing in Mathematics—Past and Present. WeBWorK and Mathematics Support Center Workshop at HKUST 10–11 June 2014. http://www.math.ust.hk/~support/workshop-2.html. Accessed 11 August 2015.

8. Megginson, R. 2015. Personal communication.

9. Speyer, D. 2012. Some thoughts on teaching Michigan calculus. In the Secret Blogging Seminar blog, https://sbseminar.wordpress.com/2012/02/08/some-thoughts-on-teaching-michigan-calculus/. Modified 11 April 2013. Accessed 17 August 2015.

10. Lotus 1-2-3. https://en.wikipedia.org/wiki/Lotus_123. Modified 19 June 2015. Accessed 11 August 2015.

A View of a Room

An alternate title for this might be “what a difference a room makes.” Last semester I taught our general linear algebra course for majors (which is a linear algebra and proof course) in a room with arm desks and a seating capacity that was close to the number of students in the section; this semester I’m teaching the same course in a room with tables and space.
[image of classroom with arm desks]
Last semester’s room
The room I was teaching in last semester is shown in the picture to the right; the room this semester is like that shown in the right-most picture in the header of this site. What’s the difference? What I’m trying to do in the class hasn’t changed; both semesters I talk(ed) a little and students work(ed) on “games” (worksheets) a lot. The worksheets themselves have been ported almost directly between semesters. The classes are almost exactly the same size, and both consist, or consisted, of good students who work hard. So all of that is basically the same.

The differences are really two: this semester I’m teaching three times a week for 80 minutes, while last semester I was teaching four times a week for 50 minutes; and this semester I’m teaching in a room that is set up for students to work together. Both of these are really significant changes, but here I want to focus on the latter. What difference does the room make?

I think there are two differences. One is perceptual: what do we expect to do in the room? And the other is functional: how well do things we do there work?

What does the room tell us about what happens in a class held there? A room with tables at which students sit and look at each other is telling us—my students and me—that their interaction is important. And that’s key to how we want to do things: research says (e.g., Laursen, et al.) that the pieces of instruction that matter for learning are the degree of students’ engagement and their interaction with each other. I’ve been teaching for a while now, and I like to think that I can get students to engage and interact (even) in a sub-optimal room (e.g., one like that I was in last semester), but if we all walk in and know what’s going to be happening in the room… well, I’ll use the word that is all but banned in our proofs: “clearly,” this is better.

Then we’re in class. How well do things work? At some level this is impossible to say—are my students learning better this semester? I certainly can’t say that’s true (both last and this semester my classes were and are about 25 students, the exams are different, the students are different…). But I think that it’s been easier for students to work together, which is how we spend the majority of our time in class. The room is slightly larger and I can easily get to every table; that much is indisputable. I don’t think students always work with more than one neighbor, but I think they do so more when they’re at these tables than they did when they had to figure out how to wrestle their arm desks about. Clearly, these are good things.

On the last day of class for this term I walked into class armed with both a worksheet and, because it was a review day, a short lecture on how the material we were finishing the course with brought together the various central ideas in the course. I started by asking if students would like the lecture summary, or if we should just go into the worksheet. I didn’t take a scientific poll, but all of the people that said something (that I heard) voted for the worksheet. The small egotistical part of me shrieked in protest—this was a good summary!—and a larger part of me thought, well, maybe that’s a sign that things are going the way we want. Would their answer have been the same in a different room? This I don’t know, of course. But hopefully those who know about these things will say their response indicates that the classroom was one in which we were making a difference.

35% Contained

When we listen to news of forest fires being fought in California or Montana (or wherever), it seems that there is a running assessment of the containment of the fire. It’s 5% contained, or 35%, and then about the time I am thinking “but that’s only a third,” the fire is resolved and the news moves on to the next crisis.

I have a feeling that semesters run much the same way. I set up and manage all of the instructional technology applications that we use in our math classes, and these all—web homework, student information forms, databases, etc.—need to be set up and updated (daily) at the beginning of the semester. The result is that I spend a lot of time, as I describe it, “putting out fires”: dealing with what has to be done, then finding that class needs to be taught, and then, on return from class, disappearing under the pile of new e-mail that came in while I was teaching. It’s a state that makes getting much more done than what must be completed this afternoon (or, if I am lucky, tomorrow) all but impossible, but I have this sense that at some point in the semester the fires should be contained or put out and there will be time to stop and think and maybe get some other things done.

That point of containment always occurs much later than I think it should, of course, and even when (if) it does there is never the expanse of time I expect—the news cycle moves on to other crises, after all. But each semester I still think that in about a month I should suddenly be able to work on Other Projects, only to have the expectation dashed by a frenetic onslaught that just keeps coming. But at the end of the day having all of these things to do, that need to be done, is part of what makes working in this environment so much fun.

Our web homework server provides on-line homework for, in the fall semester, on the order of 5000 students in over 160 web homework “courses” (classes, or class-sections). (We also support some 45 courses for the University of Michigan Dearborn’s math department.) And our courses have multiple schedules for the homework deadlines—the calculus I sections that meet MWF have different due dates than those that meet TThF, and so on. Throughout our drop/add period we see hundreds of students adding and dropping courses and class-sections. Even with some degree of automation, perhaps it is no surprise that the fires start as fast as I can put them out. On the other hand, it’s amazing that it all seems to work as smoothly as it does.

And I think that this is part of what is so much fun about this part of our program. We actually are able to administer 160 homework assignments (or, if you count the different schedules, about 660 assignments) and give over 11,000 proctored skills tests to those 5000 students in the course of the semester! Equally amazingly, this piece is only one of the many moving parts present in these courses. And all of those parts are held together at the center of our instructional program by the students and instructors in those courses, with whom we all get to work as we put out the fires and keep it all running. It’s a lot of fun to get to manage all of the technology, but even more fun to work with all the people involved in using it in pursuit of learning.

And student learning is, of course, the important part. On a personal level, this comes to the surface when I leave the on-line systems behind and walk into the classroom. Peculiarly, it’s also a bigger job, in some respects, to teach those 26 students than it is to manage the on-line homework for the 162 courses and all of their instructors. I have always thought that teaching one class could be a full-time job if one let it be. There are worksheets to write or modify, material to figure out so that what’s difficult is explained, logical progressions through the material to bring into focus. And then we walk into the space where the students are working on all of these things. I feel as if that’s where we all get away from the fire-line. I leave the e-mail disasters behind, students leave their deadlines to queue up at the door with my e-mail and smoldering fires, and we get to play with orthogonal projections. Or inner products, or whatever the topic of the day is. We think about the pictures behind them, stop to marvel at their application, start a problem only to discover that it actually isn’t straightforward. And we work it out. At least until the class-hour ends, at which point we go back out to see what the queues of homework deadlines or crises at the door look like. We check the containment status of our many fires, and go back to the line.

Until next class.

Promoting confusion… and its resolution

Confusion is traditionally an unwelcome guest in the classroom. It is a problem that the conscientious instructor needs to root out with clear explanations, and if students harbor it we expect bad things to follow. Recently, however, its reputation has been changing.

In August, The Chronicle published an article titled “Confuse Students to Help Them Learn.” Among other things, it describes an experiment in which physics students watched one of two different educational videos: one video explained a basic physics concept straightforwardly in a clear and concise manner, while the other featured a confused student trying to wrap his head around the concept and, despite receiving only guided questions from a tutor, eventually getting it right. The student viewers found the first video easy to understand, clear, and concise, while they found the second one confusing. But, the article notes, “the students who had watched the more confusing videos learned more. The students who had watched the more-straightforward videos learned less, yet walked away with more confidence in their comprehension.”

Psychologists are also conducting research on confusion and learning. Professors Sidney D’Mello and Arthur Graesser have worked on a number of studies investigating the relationship between confusion and learning. They, along with their collaborators, published an article this year entitled “Confusion Can Be Beneficial for Learning” (preprint here), in which they observe that “confusion is expected to be more the norm than the exception during complex learning tasks. Moreover, on these tasks, confusion is likely to promote learning at deeper levels of comprehension under appropriate conditions.” The article goes on to explore what these conditions are in a controlled experiment.

The role of confusion in the classroom, along with the phrase, “under appropriate conditions,” is something I have been thinking about lately (albeit in a less scientific manner). The Chronicle article states a similar sentiment with the section heading “Confusion works, except when it doesn’t.” But before the caveats that not all confusion is helpful, the big idea is that confusion can be helpful. Many of my students are conditioned to think that all confusion is bad, so the first thing I need to do in order to use it effectively in the classroom is help students see that confusion can be productive.

I find it helpful to make the following distinction. Confusion comes in two flavors: productive and unproductive. Confusion is productive if you have the skills and resources to work on and sort out your confusion; it is unproductive if you do not have these resources. And because constructing your own knowledge is much more effective than simply receiving knowledge from someone else, productive confusion is a valuable commodity in the learning process.

But just knowing that confusion can be productive is not always helpful in practice. The trouble is, you do not immediately know which kind a particular confusion is—quite often, even productive confusion feels unproductive until you resolve it. Always try to see if you have the tools to address your own confusion! (And if not, try to figure out what tools you might need but don’t have—a major part of an education consists of converting previously unproductive confusion into productive confusion.)

It is much easier to talk about the relationship between confusion and learning than it is to harness its power, though, so here are some thoughts on how productive confusion has shown up recently in my classroom:

I am currently teaching an Inquiry Based Learning class for pre-service elementary school teachers. In fact, many of the course materials are developed to situate students in the “productively confused” category. For instance, we recently spent a week studying place value and how strongly it influences the way we think about numbers. We do this by working in base-five. What’s more, I change up the digits, so we write numbers with the symbols A, B, C, D, and 0. (This is post number AA on this blog…) This is all done in the service of creating confusion. In order to make this confusion productive, I consider how I set the context for our study, what tools and information I give the students to start with, and what questions I ask them to investigate (and in what order).

First, I think context is very important in order for students to recognize confusion as productive. In the class described above, I tell students that we are interested in studying the structures and patterns of the way we write numbers, and the change to base-five is in the service of distinguishing structures and patterns from things that just feel obvious to them in base-ten. Without this explicit objective, some students may lack motivation to apply themselves to the problems, and others may side-step a successful resolution of their confusion by employing simple procedural methods. In effect, I have described what is going to make this exercise “productive.” For the remainder of the class, we relate all of our progress in understanding base-five to the primary goal of understanding the structure of a base system.

Second, I need to tell students enough that they can start working on problems, but not so much that the problems become only applications of what I tell them. This can be a delicate balance (and is also influenced by what problems I pick to put on the worksheet). For our work in base-five, I tell the class what symbols we will use (A, B, C, D, and 0), and how to count in base-five (by listing the numerals A, B, C, D, A0, AA, AB, AC, AD, B0,…). At this point, I assume that students can continue counting (perhaps with the help of their group members), and thus can make some kind of progress on the worksheet. I leave them to discover patterns and structures in their groups.

Finally, I select problems and put them in a particular order. I want students to have success (confidence is very helpful for making confusion productive), but I also want them to learn something substantial. The problems are arranged to be increasingly difficult to solve just by counting (“figure out in base-five how many gummy bears you have in a package,” followed by “then figure out how many gummy bears you and the group next to you have all together”). Eventually, groups will need to utilize the structure of how a number is written in base-five in order to progress (“accurately place 0, AD, CDD, B000 and A0000 on a base-five number line”).

By the time students have (at least partially) resolved their confusion with writing numbers in base-five, they have discovered the use of grouping and ungrouping (by fives) as well as what the phrase “place value” means in a base system. These discoveries become things that they own, and that they will reference in future investigations. My students have encountered productive confusion, and converted it into knowledge and understanding.
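For readers who want to play along at home, the counting scheme above can be sketched in a few lines of Python. The digit values here (0, A, B, C, D standing for 0 through 4) are inferred from the counting sequence in the text, so treat this as a sketch of the notation rather than the class’s own materials:

```python
# The base-five notation described above: the symbols 0, A, B, C, D
# stand for the digit values 0, 1, 2, 3, 4, so counting runs
# A, B, C, D, A0, AA, AB, ...
DIGITS = "0ABCD"


def to_base_five(n):
    """Write a nonnegative integer using the symbols 0, A, B, C, D."""
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, r = divmod(n, 5)  # repeatedly group by fives
        out.append(DIGITS[r])
    return "".join(reversed(out))


def from_base_five(s):
    """Read a string of the symbols back into a base-ten integer."""
    value = 0
    for ch in s:
        value = value * 5 + DIGITS.index(ch)
    return value
```

Under this reading of the digits, the number-line problem places AD at 9, CDD at 99, B000 at 250, and A0000 at 625 in base ten—and post number AA is indeed post number 6.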

Documenting Participation

In most courses I’ve experienced as a student or as an instructor, there’s been some nominal part of the grade assigned to “participation.” As an instructor, I include this because I want to emphasize the importance of students’ in-class work, in particular the verbal work of mathematical discourse. The classes I teach happen to be Inquiry-Based Learning classes, but I think this is broadly applicable to any kind of course.

However important I find this work to be, a method for quantitatively assessing participation escapes me. How can I possibly assess a student’s participation in both small group work and whole-class discussion in a meaningful way? And how can I do that in a way that doesn’t inherently favor certain personality types (such as those who raise their hand to answer every question)? I usually end up giving all my students full points, except for those who were often tardy or absent. But it is not “butt-in-the-seat” time that I am trying to measure, so this is unsatisfying.

To address this, I am trying something new this year. I am drawing students’ attention to the work they do in class and calling such incidents (asking a question, making a suggestion, etc.) a “class contribution.” Then I ask students to submit, on a weekly basis, a response to the following:

Describe your class contribution and explain what made it valuable to the mathematical discussion. What did you learn from the experience? What did your group or classmates learn? Remember that a contribution does not have to be a correct answer or slick solution. Good questions can be incredible contributions, and ideas that you share that may not pan out are often just as valuable, if not more so, than complete answers.

Do I really think I will be able to differentiate student participation in a fine-grained, quantitative, and meaningful way with these responses? Absolutely not. It is still the case that almost all of the students will get full points for the participation part of their grade. However, my goal in including a “participation grade” in the first place was to highlight the importance of in-class work. I am doing that by simply asking the students to tell me the importance of in-class work.

There is little I can do to measure the effect of this tiny weekly assignment. Will it create a more involved class? Improve student learning? There is no way for me to associate observed changes or improvements to this assignment. But the assignment allows me to see students’ awareness of the effect of their participation on classroom learning and their own learning. As an example, here is one contribution I’ve received so far, where a student describes a question she asked during another student’s presentation of constructing a perpendicular intersection on the first day of class:

I asked about the length of the radius the presenter used to draw the circles. I learned that the length had to be greater than that of half the line segment, and I think other people in the class learned this, too. I think the presenter learned that it helps to be more specific.

I can’t say that this student made these observations because of the assignment. But it’s awfully reassuring to see this concrete example of a student recognizing the value of her class contribution (while simultaneously reflecting on class content and mathematical communication). And wasn’t this my goal in the first place?


The Storm Before the Calm

This year, as I have for a while, I had the pleasure of working with our new instructor training program in the last week of August. And this year, as I have for a while, I found it to be the effort of the semester in condensed form. This week runs at an entirely different pace than the rest of the semester. Not perhaps quite a storm, but still the better part of a week’s worth of solid days starting at 8:30 and finishing between 4 and 5.

I think that’s good. I find myself thinking of the MAA’s Project NExT, a professional development program which I had the good fortune to help run for many years. Project NExT succeeds by getting amazingly good presenters together with about 80 new Fellows for an incredibly intense 2.5 day workshop. It, too, is a bit of a storm. But it works, and works amazingly well, in large part because it not only provides resources but creates community.

We had 45 new graduate students and faculty in our new instructor training program this year, and ran about 20 sessions with a group of about 10 faculty, graduate students and staff. The goal is to get our new instructors to understand how we want to teach here, and why, and to give them all the tools and background that they need to do that well. I then want to ask “does this work?” and to answer “yes!”, but I’m not sure it’s that black-and-white.

To be sure, many of the people who go through our training turn into unbelievably good teachers. Many of the people who move through the Department are or turn into unbelievably good researchers, too. Some are very good ultimate players. Some are exceptional bridge players (I think; I don’t actually play bridge—in any case, comparatively many play bridge, and others are better placed to evaluate their skill level). In all cases I think a lot of what drives that success is internal to the individual. I said in a talk I gave to Project NExT Fellows in 2013 that “pedagogy is personal,” and I think that’s true.  That is, there are many ways to be a successful teacher, the way one person accomplishes that is likely to be different from the way that someone else may, and how good one is depends a lot on a willingness to be aware and to work very hard to improve. And that is something that is very hard to see how to build into an instructor training program.

But we can probably build in some structures that facilitate getting there. I hope that the people with whom we work remain or become convinced that it matters—to all of us here—if we teach well. And I hope that they will see enough of the evidence from the pedagogical literature that student engagement is essential for learning to see that as something with which they should be concerned. Certainly there are those around us for whom learning happens no matter what the instructor does. Generally speaking there is a case to be made that I was a student who could do that, and this is likely the case for any of us who are pursuing or have earned a Ph.D. But that may simply prove the point that engagement is essential: I think I probably managed to be engaged in most classes, and certainly engaged with the material outside of class. So teaching well has to be concerned with student engagement.

Of course, that’s why we have a lot of discussion about inquiry based instruction and learning in our program. But we also have to believe that it matters, and to believe that we all should work to do it well, and to get to that point I think the storm that blows through training is itself important. We don’t exist in a vacuum, and we don’t teach in a vacuum. (As Groucho Marx didn’t say, inside a vacuum it’s too dark to teach, and it’s probably too loud, too.) I think we have a community of teachers and learners in our Department here, and I think that having an intense space where we are all being deluged by activity and ideas is something that can help build that. This is something that works in Project NExT, and I hope it works here.

Learning Math “Matrix” Style

This past week I participated in the Department’s placement and advising of incoming LSA international transfer students.  Due to visa issues, these students are among the last to participate in summer orientation.  This year there were about 160 international transfer students, most of them from China.  I very much enjoy working with these enthusiastic, bright, jet-lagged students.

Usually, the process of evaluating where to place a transfer student begins with a review of the student’s paperwork – course descriptions, books, syllabi, old exams – and is followed by some pretty straightforward questions about the material.  Oftentimes students will present paperwork that makes it completely clear what first or second year math course they should take next.  About one in twenty students arrive having taken enormous amounts of what I consider to be graduate level math (e.g., measure theory or functional analysis), and their placement is more challenging (and fun), but still clear.

However, it is also very common for a student to arrive with a clutch of course descriptions/syllabi that (1) are so vague as to be useless (e.g. “This course enables students to understand calculus deeply by studying differentiation and integration.”) or (2) look as if someone had designed the course by first cutting out the chapter titles from books about linear algebra, differential equations, calculus, and multivariable calculus and then randomly rearranging them into a list – such a course description might start: implicit differentiation, Hilbert Spaces, eigenvalues, mean value theorem, radius of convergence, …

For these students the placement process is more difficult.  I usually begin by asking computational questions:  What is the derivative of \arctan(x)? What is the indefinite integral of \sin(t)?  What is the radius of convergence of \sum_{n=1}^\infty (\frac{x-4}{5})^n?  What are the limits of integration when integrating over the bounded region that has boundaries x = 0, y = 0, x+y+1 = 0?  What is the curl of F(x,y,z) = \langle xy, x \sin(yz), \exp(z^2xy) \rangle? A longish math discussion based on the student’s responses to these questions follows, and we eventually home in on a course for the fall that will be appropriate for the student.

This quizzing on computational matters always bothers me – I don’t believe the value of taking a calculus class lies in learning that the derivative of \arctan(x) is \frac{1}{1+x^2}, that the indefinite integral of \sin(t) is -\cos(t), or that the power series \sum_{n=1}^\infty (\frac{x-4}{5})^n has radius of convergence 5.  Yes, you should learn how to compute all of these things (and much more) in a year of calculus.  However, since WolframAlpha can also do these computations for us already, hopefully the students are learning something more than how to compute.
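Indeed, these facts are easy to check by machine.  Here is a minimal sanity check in plain Python (stdlib only, a stand-in for the WolframAlpha queries mentioned above rather than a computer algebra system):

```python
import math


def numderiv(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)


# d/dx arctan(x) = 1/(1 + x^2): check numerically at x = 2.
assert abs(numderiv(math.atan, 2.0) - 1 / (1 + 2.0**2)) < 1e-6

# An antiderivative of sin(t) is -cos(t): check that d/dt(-cos t) = sin t.
assert abs(numderiv(lambda t: -math.cos(t), 1.0) - math.sin(1.0)) < 1e-6

# The series sum_{n>=1} ((x-4)/5)^n is geometric with ratio (x-4)/5,
# so it converges exactly when |x - 4| < 5: radius of convergence 5.
```

Of course, a student who can only reproduce these answers has learned little more than what this dozen lines of code “knows” — which is the point.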

This year I experimented a bit by expanding my bank of quiz problems to include non-computational problems.  Since calculus at Michigan focuses on having students understand the underlying concepts, I decided to use old Michigan Calculus I exams.  The results were mixed, and I think I won’t do this again.  However, there was one obvious conclusion:  students who took math courses for which the course description was a laundry list of disconnected mathematical topics had basically no conceptual understanding of the mathematics they had studied.

This got me to thinking about the mathematical methods courses I was required to take as an undergraduate physics major.  Since I had already seen a coherent presentation of most of their content, I found these topic-a-day math methods courses to be shallow, unsatisfying, and easy (the latter was a good thing since I was trying very hard at the time to woo my eventual spouse).  However, I think my non-math major classmates probably struggled mightily, learned as much as the tests required, and long ago blocked the courses from their memory.  I understand the urge to design a course that is encyclopedic and covers much of the math a scientist working in the field right now might need, but I’m not so sure about the effectiveness of learning mathematics “Matrix” style.  I think yesterday’s Dilbert cartoon pretty well sums up my thoughts on this point.

So, what should the aim of an undergraduate math class be? Here is a partial answer.  For sure, a good math course should teach students how to do things that WolframAlpha can also do, but it must do more.  A good undergraduate course should also engage the students with the material, develop their problem solving skills, and have them grapple with the concepts that underpin the material.  And all math classes should help students develop their abilities to (1) think logically and abstractly and (2) express themselves rigorously and concisely.

Maybe, if we do it right, such students will even be equipped to learn the mathematics that explains the science not only of today, but the science they will encounter decades hence.  Maybe.


Creation

In July I teach a class in the Michigan Math and Science Scholars summer camp.  The camp is for advanced high school students who are interested enough to give up 2 weeks of their summer to learn some math and science.  For the last 9 years I’ve taught a class called Math and the Internet, where we focus on some of the mathematical ideas that have made the internet possible.  Jason Howald and I created the class in 2006, and for the last 5 years I’ve been partners with Sunny Fawcett.  Our undergraduate assistant this past summer was Kristen Amman, who set a new standard in parent outreach by blogging and tweeting every day.

One highlight of the class is when we build a machine out of breadboards, logic gates, LEDs, switches, and a lot of wire.  It is 2 days of work (6 hours a day) for the students, and in the end it allows them to post a message on the internet, with both text and pictures.  Basically, to use the machine to send a character, the operator keys in a 6-bit binary code on 6 switches.  Then she hits the clock button 6 times.  That cycles an index register through the numbers 0 through 5, indicating the active switch.  The dereferencer figures out which bit the index register is referencing, and puts the value of that bit at a particular location connected to a particular pin on the serial port of my computer.  Then some software (which the students write) reads that bit and the other 5, and concatenates them to reconstruct the 6-bit code, which then translates to a character.
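The data path just described can be sketched in software.  Here is a hypothetical Python simulation of the decode step; the actual 6-bit character code the class used isn’t given here, so the A-to-Z table below is an invented stand-in, and the function names are mine:

```python
def read_bits(switches):
    """Simulate six clock pulses: the index register steps through 0..5,
    and the dereferencer emits the value of the active switch each time."""
    assert len(switches) == 6
    return [switches[i] for i in range(6)]


def reconstruct(bits):
    """Concatenate the six bits (most significant first) into a 6-bit code."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value


def to_char(code):
    """Invented translation table: codes 0..25 map to 'A'..'Z'."""
    return chr(ord("A") + code) if 0 <= code < 26 else "?"


# The operator keys in 000111 (decimal 7) on the six switches:
message_char = to_char(reconstruct(read_bits([0, 0, 0, 1, 1, 1])))
# message_char == "H"
```

Six bits give 64 possible codes, enough for letters, digits, and some punctuation, which is why the machine sends one 6-bit code per character.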

Each student in the class works on one part of the machine, and then we put it all together at the end of the second day.  The details are interesting and someday we’ll write them up.  Here is a picture from this past summer.

But I wanted to write about a teaching issue I had. We had one student who was a little iconoclastic, which doesn’t bother me on the face of it, because I am too, sometimes. But he also felt he was a little “too cool” for the class, and that attitude can sometimes spread to others and sap everyone’s morale.

I had that in mind when, after we announced with much fanfare the plan to build the machine, the student in question said, “It doesn’t sound very efficient.  Why not just use a keyboard?”

I didn’t handle this very well.  I was rather annoyed at the student for not appreciating the great adventure we had just laid before him.  I forgot, in that moment, one of the maxims of teaching: it’s not about me, it’s about them.

So I said something a little testy: “Well, it’s not very efficient to use you, either <student’s name>.”  I caught myself, and tried to explain that our goal wasn’t efficiency, it was exploration, etc.  We went on to build the machine successfully, and morale was pretty high, even in the student in question.  So hopefully there was no lasting damage.

Still, it’s been bothering me, and I’ve been mulling over in my head what I should have said.  Here is what I came up with this morning:


Back in the 70s there was this miniseries on the BBC with the rather pompous title of  The Ascent of Man.  (One must forgive the sexist title—it was not meant to be so.) The show was written and narrated by a Polish-English mathematician and polymath named Jacob Bronowski, and it dealt with the anthropological and technological changes which added up (as Bronowski saw it) to civilization as we know it.

There is one line that I remember best.  In the first episode, Bronowski summed up the show by saying:

The greatest force propelling the ascent of man is man’s desire to marvel at his own handiwork.

The truth is, people and technology advance for many different reasons. Sometimes people are motivated by money, sometimes by a desire to do good, sometimes by trying to impress someone else.  Sometimes things happen just by accident.  In the late 1970s James Burke created another British show called Connections, about technological advances, where he showed a lot of different ways things happen.

All that notwithstanding, it’s a “desire to marvel at my own handiwork” that motivates me most of the time.  I like to create something, and then step back and look at it.

A hundred years ago, most people made things, either for their job or at home.  Some portion of them took a lot of pride in what they made.  And occasionally, in the course of making something, they would get an idea how to do it better.  And sometimes that idea was good enough and general enough that it made its way out to everyone else.

Nowadays most people in our world don’t get to make things very often.  And a lot of us buy everything we use, because, honestly, it’s more efficient than making things yourself.  No argument there.

But, if we relegate the making of things to only a small class of people, we as a society take at least three losses:

  1. Fewer people are applying themselves to hard problems,
  2. We only get to use what that small class of people want to make for us, and
  3. Fewer people get to feel the joy of creating something and seeing it work.

So think of this as an opportunity to break the mold of being only a consumer.  If we can get the machine to work (and that’s a big if), it will be our creation, and no one else’s.


That’s probably too long for me to have delivered without boring them.  But it’s what I wish I had said.  I just spent a while looking back over it. 🙂