Is there an alternative to traditional exams?
One alternative that's been gaining momentum in actual classroom use is standards-based grading (SBG): http://www.fwps.org/teaching/sbe/grading-system/
In SBG, the instructor establishes a list of milestones that students are expected to attain throughout the course. Students can then provide evidence of any sort -- within parameters set by the instructor -- that proves they have met the standard. Student evidence of attainment of a standard is marked on a scale usually running from 0 to 4 (unacceptable, novice, progressing, acceptable, and mastery -- or something like this), and grades are assigned at the end of the semester based on how many of the milestones have been met at the "Acceptable" level or above.
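To make the end-of-semester bookkeeping concrete, here is a minimal Python sketch of one way grades might be tallied under such a scheme. The 0-4 scale matches the description above, but the "acceptable" cutoff of 3 and the letter-grade thresholds are illustrative assumptions, not part of any official SBG rubric:

```python
# Minimal sketch of end-of-semester SBG bookkeeping.
# The 0-4 scale follows the description above; the "acceptable" cutoff
# and the letter-grade thresholds below are illustrative assumptions.

ACCEPTABLE = 3  # "acceptable" or "mastery"

def final_grade(best_scores, total_standards):
    """best_scores: dict mapping each standard to the student's best score (0-4)."""
    met = sum(1 for score in best_scores.values() if score >= ACCEPTABLE)
    fraction = met / total_standards
    if fraction >= 0.90:
        return "A"
    elif fraction >= 0.75:
        return "B"
    elif fraction >= 0.60:
        return "C"
    else:
        return "F"

# Example: a student who eventually met 19 of 20 standards.
scores = {f"standard {i}": 4 for i in range(1, 20)}
scores["standard 20"] = 2  # still "progressing" at semester's end
print(final_grade(scores, total_standards=20))  # -> "A"
```

The key design point is that only each student's *best* demonstrated level per standard is recorded, so a weak first attempt is simply superseded by later evidence.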
For example, in calculus, one standard might be "Take the derivative of a second-degree polynomial using the limit definition". A student might show that they know this by doing a problem on a standard timed test. But maybe they don't have it down as well as they should, and on the test their work is marked as a 2 out of 4 (progressing; maybe they got the definition right but did some of the resulting algebra wrong). This isn't the end of the story. Later in the course, the student can show evidence again that they've learned what they need to learn -- for example, they can schedule time in office hours to come work a problem or two to show you they've met the standard. Or maybe take a short quiz in class, or work a problem during unstructured group work time in class meetings, or whatever avenue the instructor allows.
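To make that standard concrete, here is what a complete piece of evidence for it might look like, worked for a generic quadratic (the coefficients $a$, $b$, $c$ are placeholders, not tied to any particular course problem):

$$
\begin{aligned}
\frac{d}{dx}\left(ax^2+bx+c\right)
&= \lim_{h\to 0}\frac{\left[a(x+h)^2+b(x+h)+c\right]-\left[ax^2+bx+c\right]}{h}\\
&= \lim_{h\to 0}\frac{2axh+ah^2+bh}{h}
 = \lim_{h\to 0}\left(2ax+ah+b\right)
 = 2ax+b.
\end{aligned}
$$

A student marked 2 out of 4 might, for instance, have set up the limit correctly but botched the algebra in the middle step.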
The point of SBG is that we want to assess students based on what they know, and give them multiple ways to show that they know it. SBG can therefore be a superset of traditional timed testing -- students that do well in timed situations will do fine in SBG, but students who struggle with timed testing can have multiple chances to get their act together, and as long as they can prove they've mastered the material by the end of the course, that is what matters.
I don't personally practice SBG but I would like to. I'd suggest this blog post by my colleague Jon Hasenbank who is a big SBG proponent: http://profjonh.blogspot.com/2014/02/sbg-mia14.html
Although exams have the flaws you described, there are also many positive aspects you have not discussed:
- Exams are an objective way to test performance in a hard-to-copy environment. Although one may disagree with what is actually being tested, their objectivity is usually not questioned.
- They are fast. It takes 3-4 hours to test hundreds of students.
- They are universally used and most of the students are accustomed to them.
- They are an excellent way to prepare students to perform under stress, a skill that is extremely useful in any work environment. Most jobs and skills involve some form of testing (e.g., presentations, interviews). Even getting your driver's license requires a test. So, learning to do well in exams is a crucial real-life skill.
The only complementary tool I can think of (though not good enough to completely replace exams) is project assignments. Especially in CS, projects are a very effective way to prepare students for real-world work. But they are not a good enough method on their own without some sort of individual written examination. They allow too much collaboration and copying, and when done in teams, the good students usually do most of the work while others slack off. While a professor might take measures to minimize this, it is impossible to avoid it completely.
So, I do not think there is a universally better way to test students' performance than written exams. One may complement them with projects, assignments, and oral examinations, but in most cases, completely abandoning them is probably a mistake.
Students are not allowed to access resources, whereas in reality they would
- Not necessarily so. Instructors can allow "cheat sheets" or open-book exams.
Students get only a few minutes per question, whereas in reality they would get days
- Not necessarily true. My boss often asks me questions and expects me to give prompt answers. I don't always have the luxury of asking for a few days to research something. If we are in a high-stakes meeting with customers from out of town, my organization's effectiveness might well hinge on my ability to ask or answer intelligent questions on the fly.
Students are not allowed to collaborate, whereas in reality they would
- True, but exams are designed to measure individual ability and performance, not someone's ability to contribute within a group, or accomplish some objective as a group.
Students cram for the exam the day before, and forget what they learned after the course is over; students also focus on the exam to the detriment of learning the material and understanding real-world applications
- Perhaps so, but that's what we get when we test on the minutiae and the trivial, as opposed to testing on the synthesis of high-level concepts (more on that later).
It is difficult to claim that the exam score demonstrates mastery of the course material in contexts other than an exam
- Maybe so, but it's not difficult to claim that, as a general rule, if students are given identical exams, students who score above the median probably understand course concepts better than those who scored below the median (with some possible exceptions, due to factors such as test anxiety, and perhaps even a little luck).
In short, you bring up some possible shortcomings with what you call "the traditional exam model," but you can address some of these simply by redefining your parameters. Instead of two closed-book exams that count toward 70% of the student's grade, give three open-book exams that count toward 50% of the student's grade. Make one of them a take-home exam, and you at least put a dent in the "few minutes per question" problem. By reducing the exam percentage from 70% to 50%, you have an extra 20% to play with, so assign a group project worth 20% of the grade, thereby addressing the collaboration problem you mention.
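To see that the arithmetic of that rebalancing works out, here is a minimal sketch in Python; only the 50% exam / 20% project split comes from the suggestion above, while the equal three-way exam split and the remaining 30% bucket are hypothetical:

```python
# Hypothetical sketch of the rebalanced grading scheme described above.
# Only the 50% exam / 20% project split comes from the text; the rest
# (three equal exams, a 30% homework-and-participation bucket) is assumed.

weights = {
    "exam 1 (open book)": 50 / 3,   # three open-book exams, 50% total
    "exam 2 (open book)": 50 / 3,
    "exam 3 (take-home)": 50 / 3,
    "group project":      20,
    "homework / other":   30,       # assumed remainder
}

def weighted_total(scores_pct):
    """scores_pct: dict of component -> student score in percent (0-100)."""
    return sum(weights[c] * scores_pct[c] for c in weights) / 100

student = {
    "exam 1 (open book)": 82,
    "exam 2 (open book)": 75,
    "exam 3 (take-home)": 90,
    "group project":      95,
    "homework / other":   88,
}
print(round(weighted_total(student), 1))  # final course percentage
```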
As for cramming and concentrating on the wrong things, that's what students will do if you structure an exam that requires them to commit a lot of petty knowledge to short-term memory. I try very hard to make my exam questions address higher-level concepts, rather than nitnoid facts. I ask them to explain these concepts, often by weighing in on hypothetical debates. (Sometimes these debates aren't even hypothetical; I'll find an online discussion thread where a debate is raging, then paste it into my exam and ask them to chime in.) In other words, insofar as I can, I test on what I want them to remember five years from now. If I want them to solve a problem, but don't care whether they've memorized a requisite formula (because they'll be able to look it up anyway), then I'll just include the formula in the exam. I often tell my students, "Anything you would need to memorize will be put in the exam itself. If it's something I have trouble remembering off the top of my head, I don't expect you to memorize it for the exam."
But in practice, is it possible to improve on these flaws, without introducing major increases in workload for the instructor?
Ah, now, there's the rub. If you follow my suggestions here, look what I've done! There are three exams to grade, not two. These exams don't have a lot of multiple-choice questions, and then there's that new group project I mentioned (which needs to be drafted, assigned, and graded).
It's hard to get something for nothing; most meaningful improvements are going to come with some cost.