Academically Adrift: Limited Learning on College Campuses Paperback – Dec 28 2010
“A decade ago the United States led the world in the number of college graduates. Today this is no longer the case. Academically Adrift raises serious questions about the quality of the academic and social experiences of college students. Armed with extensive data and comprehensive analyses, the authors provide a series of compelling solutions for how colleges can reverse the tide and renew their emphases on learning. This first-rate book demonstrates why colleges, like K–12 institutions, now more than ever require major reforms to sustain our democratic society.” (Barbara Schneider, Michigan State University)
“The time, money, and effort that’s required to educate college students helps explain why the findings are so shocking in a new blockbuster book—Academically Adrift: Limited Learning on College Campuses—that argues that many students aren't learning anything.”
(U.S. News & World Report)
About the Author
Richard Arum is professor in the Department of Sociology with a joint appointment in the Steinhardt School of Education at New York University. He is also director of the Education Research Program of the Social Science Research Council and the author of Judging School Discipline: The Crisis of Moral Authority in American Schools. Josipa Roksa is assistant professor of sociology at the University of Virginia.
Most Helpful Customer Reviews on Amazon.com (beta)
The authors' observations about the importance of studious solitude and its increasing scarcity have obvious implications about the evolution of academic life. But I wonder if it is even worse than they describe. For example, the study hours they include in their data may be overly generous. Today, even those who want to learn and sit down to "study" are likely to be immersed in social media and other consumptive diversions. Students have many ways to avoid sinking into the depths of a subject or struggling with well-developed analytical writing, as the authors note. They rarely get honest and helpful criticism aimed at their individual intellectual and ethical development. I fear that the authors' important observations are only the tip of the iceberg. I hope that earnest students will read this book and set their own course.
The authors' assessment was made using the respected Collegiate Learning Assessment (CLA) from the Council for Aid to Education. That group adds that "Academically Adrift" confirms its own findings, and that when these are combined with our 47 million high school dropouts and the fact that 40% of entering college students cannot read, write, or compute at a college-ready level, the picture of our overall education outputs grows even dimmer - despite world-leading per-pupil expenditure levels.
The main culprit, per Arum and Roksa, is lack of academic rigor. They found that 32% of the students they studied did not take any courses requiring 40 or more pages of reading per week, and 50% did not take a single course in which they wrote more than 20 pages during the semester. The authors also report that students spend an average of only 12-14 hours per week studying - 50% less than a few decades ago (per Babcock and Marks) - and that much of that study took place in fashionable but inefficient groups (per the data analysis). Another conclusion: instructors tend to be more focused on their own research than on teaching. Despite this lack of effort, Arum also notes that the students studied averaged a 3.2 GPA. The 'good news' is that students reporting high expectations from faculty members did better, and 23% of the variation in CLA performance occurred across institutions.
The authors' findings are also consistent, per the New York Times (1/17/2010), with the National Survey of Student Engagement's previous review of thousands of students at almost six hundred colleges. That survey found that 12% of first-year students did essentially no quantitative reasoning activity in their coursework, and 51% of seniors had not written a paper during their final year that was at least 20 pages long - even at the top 10% of schools in the study. Similarly, an American Council of Trustees and Alumni study of more than 700 top educational institutions found that students can graduate without ever having exposure to composition, American history, or economics (The Washington Post, 1/19/2011), while the National Assessment of Adult Literacy found that the percentage of college graduates proficient in prose literacy declined from 40% to 31% over the past decade.
The authors found that students in traditional liberal-arts fields improved more on the CLA, while education, business, and social-work students didn't do so well. Business students not doing well is understandable, given the nonsensical training they receive on free trade and illegal immigration, as well as logic derived from previously different levels of competition; education students receive even more fact-defying nonsense on the 'benefits' of class-size reductions, extra years of teacher experience and training, and the general usefulness of certifications and added spending.
Authors Arum and Roksa recommend increased measurement of student learning, higher faculty expectations of students, improved K-12 performance, and less emphasis on group study. They conclude with a question: "How much are students actually learning in higher education?" Their answer: "for many, not much." Students may graduate (57%), but they're failing to develop higher-order cognitive skills - exactly the skills that educators use to excuse our dismal comparative performance on international assessments of K-12 learning.
Bottom line: "Academically Adrift's" findings are also consistent with studies of K-12 international achievement that found we're out-worked by our competitors. Why, then, do so many Asian students come to American colleges? Weekend observations at nearby Arizona State University suggest they're much more internally motivated: almost all the students in the library at those times are Asian, even though their overall enrollment is relatively small. American students must become similarly motivated. Meanwhile, Kevin Carey, policy director of the independent think tank Education Sector, summarizes the situation well - colleges can no longer say "Trust Us" in response to questions about how much their students learn (The Chronicle of Higher Education, 1/18/2011).
It's important to recognize that Arum and Roksa stack the deck, by defining "learning" in ways that advantage students in the humanities and social sciences. Engineering schools train students in a different way of thinking rigorously and solving problems. Pre-med programs cram student brains with facts. Both evaluate their students with difficult exams leading to professional certification. It's fair to say that neither Arum nor Roksa could pass those professional exams, which demonstrates that these students have learned a lot along the way.
I'd like to see a subsequent study that gets at the problem-solving skills that we expect of engineers or medical diagnosticians, and see how well economists or English majors solve similar problems in their own area of substantive knowledge. Certainly good grad students in the various fields I know can solve problems effectively, but undergrads are a mixed bag.
Setting that aside, there is a lot of provocative material here. When we look at their preferred measure, "critical learning," students in math, science, social sciences and the humanities do make progress. Business students, or those in social work and (ahem) education - - not so much. Students who read and write learn critical thinking skills, apparently no matter what they read or write about.
Whatever one thinks of their specific findings, the book is valuable for a high-level view of the American undergraduate experience. They summarize many findings from the literature, often providing a disturbing view. Though they don't phrase it this way, I was left with the impression that too many undergraduates view college as a sabbatical between the rigors of high school and the rigors of full-time employment. During this time they socialize with friends, network, and borrow money to finance consumption (including but not limited to drinking). Academic work is a ticket to enable the sabbatical but they may not assign it any intrinsic value or view it as a necessary step toward their future.
Fortunately, Arum and Roksa's findings also show that institutions matter. Colleges and universities can provide academically oriented environments in which students are motivated to learn, and motivate one another to learn. Some schools already do this. I hope this study sparks more to do so.
Because they surveyed students at only 24 schools, they are unable to say whether certain schools do a better job than others at fostering gains. For instance, do liberal arts colleges (there are two in the sample) do better? Do some schools simply outperform others?
And they criticize surveys like the National Survey of Student Engagement, asking whether we can really depend on self-report surveys to accurately measure learning, given that respondents' memories are fallible. (Of course, that's true.) But they themselves depend on self-report surveys by students to determine which student-reported activities lead to more learning.
The book is worth buying, but one has to wonder whether their methods and findings warrant an entire book. Then again, it might take the publication of a book to engender the huge media coverage the study has received. And maybe that's a good thing. Higher education needs to pay more attention to teaching and learning, and this book brings that issue to public attention.
According to the authors, each "interested" party is actually focused on something quite different: Students want an enjoyable social experience and the diploma. Parents want a safe environment for their children and a prestigious credential. Faculty members consider teaching a secondary pursuit. Administrators are more keen on recruitment and retention, and the government wants scientific research. Put this together in the market model, and higher education caters to its customers. Campuses are safe, classes are fun, and nothing stands between the student and the diploma.
That is fine, but this book asks the more important question: Are students learning? Their short answer is no (45%) or not much (55%); five percent learn much.
Consider these sobering observations: Faculty spend an average of 11 hours per week in course preparation and delivery; teaching skills are gained mainly by doing; and if instructors require less and entertain more, they get better end-of-term student reviews - which may be the only assessment their teaching receives in matters of retention, tenure, and promotion. Students spend an average of just 12 hours a week studying for a full-time course load. The result: after two years of college, 45% of students had not measurably improved in critical thinking, complex reasoning, or writing; the rest improved "barely noticeably." Meanwhile, new since the 1990s, "a lot of other countries ... are now educating more of their citizens to a more advanced level than we are."
The authors surveyed a couple thousand students at a couple dozen schools using a lesser-known and arguably insufficient standardized test, the Collegiate Learning Assessment. My opinion is that the study should have been written up more briefly and sent to a juried journal to be vetted by peers and added to other work on this topic -- not sent to a book publisher.
Richard Arum is a professor at New York University, and Josipa Roksa is Assistant Professor in the Department of Sociology at the University of Virginia. They were irresponsible by overstating their findings, drawing sweeping conclusions from one nice but flawed study, and rushing to the publisher. And the University of Chicago Press was irresponsible printing it as something more weighty than it is. But they did, and did well for themselves I'd bet.
So I'll comment on the journal article it should have been. The broadest competencies on employers' wish lists and in university mission statements include critical thinking, complex/analytical reasoning, and writing. The authors claim that these are best measured, as aggregated scores, by the 90-minute Collegiate Learning Assessment (CLA), which uses a performance task and two analytical writing tasks. The authors used 2,322 sets of longitudinal scores (2005 and 2007) from 24 four-year institutions and implemented a parallel survey. Then they exhaustively analyzed the results. The analysis is likely to catch the eye of those in academic administration, but others will surely find it painfully tedious. In the back they even share the results tables, the instrument, etc. (I suspect, to make the book thicker).
You'll have to read it to appreciate the subtleties of the results, but in short they perform multivariate regression on the dependent variable (change in CLA score from 2005 to 2007, i.e., learning) using the following independent variables, and I may have missed some:
STUDENT DATA (survey and documents)
Race/ethnicity and gender
Parent's education level
Language at home
Number of AP classes taken
GPA and SAT/ACT scores
Number of siblings
Field of study
Courses enrolled in (college transcript information)
Approachability of faculty
Faculty - student interaction outside of classroom
Expectations/standards of peers
Peer efforts toward learning
Peer helpfulness in one's own learning
Hours spent studying alone
Hours spent studying with peers
Reading requirements of courses taken
Writing requirements of courses taken
Dorm or off campus living
Hours spent with fraternities/sororities
Hours spent working on and off campus
Loans, grants, scholarships
HIGH SCHOOL CHARACTERISTICS
Region and urbanicity
Minority-dominated (70%+) high school
Highly selective, selective, less selective (based on avg. SATs)
All these data were placed against the dependent variable: change in CLA scores between 2005 and 2007. In other words, the authors asked: "What, or what in combination, explains/predicts learning?" Some factors seemed to help learning (e.g., initial academic preparation, hours of studying alone, working on campus up to 10 hours, grants and scholarships, faculty's high expectations, and studying social science/humanities/science/math). Some things impede learning (e.g., time spent studying with peers, time spent in fraternities/sororities or working off campus, and choosing business/education/social work). Some, surprisingly, don't seem to matter (faculty interaction out of class, peer expectations/effort/helpfulness). There are many observations like these. I'll mention that even with all the data in the model, 58 percent of the variation in learning was still unaccounted for; they captured less than half the variation with their large and elaborate net. This alone suggests caution when interpreting the results.
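To make the modeling setup concrete: the analysis described above amounts to an ordinary least squares regression of the 2005-to-2007 CLA score change on student and school covariates. The sketch below is purely illustrative - the data are simulated and the variable names (hours_alone, hours_peers, sat) are stand-ins, not the authors' actual dataset, covariates, or code:

```python
# Hypothetical sketch of the regression the reviewer describes:
# regress "learning" (change in CLA score, 2005 -> 2007) on a few
# covariates via OLS. All data here are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 2322  # number of longitudinal score pairs reported in the study

# Simulated covariates (illustrative names, not the real survey fields)
hours_alone = rng.poisson(8, n)    # weekly hours studying alone
hours_peers = rng.poisson(4, n)    # weekly hours studying with peers
sat = rng.normal(1100, 150, n)     # prior academic preparation

# Simulated outcome: positive weight on solo study, negative on group
# study, plus a preparation effect and plenty of unexplained noise
y = 0.8 * hours_alone - 0.5 * hours_peers + 0.02 * sat + rng.normal(0, 10, n)

# OLS fit: design matrix with an intercept column
X = np.column_stack([np.ones(n), hours_alone, hours_peers, sat])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# R^2: share of variation in "learning" the model accounts for
resid = y - X @ beta
r2 = 1 - resid.var() / y.var()
print("coefficients:", np.round(beta, 3))
print("R^2:", round(r2, 3))
```

With noise this large, the fitted R^2 stays well below 1 even though the coefficient signs are recovered correctly - which mirrors the reviewer's point that the authors' full model still left 58 percent of the variation in learning unexplained.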
Their methods seem sound and their analyses are thorough. The book is mostly well cited and includes a long appendix on method (the last chapter ends on p. 144 of 258). This could have been published in an academic journal, and on first read I thought its social importance and the chance of a wider readership justified sprucing it up with a nice cover and an integrated literature review. After hearing some criticism from my peers, I reconsidered. It's good, but overreaching. If you're an academic administrator, please read it anyway.