The Last Time Trump Paid Taxes

#TheLastTimeTrumpPaidTaxes, Internet Explorer was not yet a thing. Amazon was a wet thing. Craig’s list had groceries. Google was a misspelling.

DVDs were the technology of the future. The Dow had never hit 5,000, and the federal speed limit was still 55. Gabby Douglas had not been born.

When Trump last paid taxes, Cuthullin sat by Tura’s wall; by the tree of the rustling sound. His spear leaned against the rock. His shield lay on the grass by his side.

There were only 17 and a half states in the U.S.

Triangles had not yet evolved a third side, and—I remember like it was yesterday—a Coke cost a nickel down at the Ben Franklin.

When Trump last paid taxes, all who escaped death in battle or by shipwreck had got safely home except Ulysses, and he, though he was longing to return to his wife and country, was detained by the goddess Calypso, who had got him into a large cave and wanted to marry him.

When Trump last paid taxes: President Clinton. Heyo!

When Trump paid taxes, he didn’t even realize he was about to become part of the 47% because Mitt Romney was still working as a boy bootblack alongside the carriage house in Faneuil Hall.

Brangelina was nothing but the seventh-most popular order at the Orange Julius in the mall downtown.

Imagine, if you can, the excitement that was caused by the birth of Paul Bunyan! It took five giant storks, working overtime, to deliver him to his parents. And in celebration, Donald Trump paid his last installment of taxes to the federal government.

When Trump last paid taxes, the earth was without form, and void; and darkness was upon the face of the deep. And the Spirit of God moved upon the face of the waters.

Fossils formed in the deeps that are now wall sconces.

Red was still orangish, and cows yet had gills.

Towers rose that would fall.

Navigation by frustration

My astute and thoughtful colleague Sam Rebelsky has posted a righteously angry essay about the current state of Grinnell’s web presence. I will add one additional thought about the functionality of the site for members of the Grinnell community, who have to try to figure out how to navigate the public and private sides of the College’s web pages, in part by guessing which side of the public/private divide contains any given bit of information.

Each side of the wall has its own organizational conventions and search functions; the intranet cannot search the web and vice versa. We might take issue with the organization of information on either side of the wall, but that can, in theory, be fixed. The deeper problem lies in determining which side to search. There will always be information that seems like it should be on one side but is actually on the other–we’ve seen for many years how much disagreement we have among reasonable people about these decisions–so the current system will always be sending people down the wrong rabbit hole, and the only signal pointing to the correct rabbit hole is frustration. If you think something is public, you can only discover that it is on the intranet by exploring the public side of the site thoroughly and finally giving up. The same thing happens if you make the opposite mistake. Driving people crazy is a necessary and constitutive feature of the current setup. I will call this phenomenon, which I have experienced repeatedly, the Two Rabbit Holes of Unending Woe.

The only way around that problem is to eliminate the ambiguous cases as much as possible. I see two ways to do that.

1) You can put pretty much all the web content into the private intranet, so everybody knows to look there. But this won’t work: everyone knows, for example, that some events need to be advertised to a broader public, just as everyone knows that the Center for Careers, Life, and Service or the English department needs some public presence. Therefore, if you force other parts of those functions onto GrinnellShare, you necessarily create and maintain the Two Rabbit Holes of Unending Woe: anybody who chooses the wrong Rabbit Hole–and they will sometimes choose the wrong Rabbit Hole–has to explore it completely before frustration finally leads to the correct one.

(Sometimes, even the navigation of the correct Rabbit Hole doesn’t work. More than once, I have finally had to give up and contact staff to ask them to guide me to their content. We can’t have web architecture built on a foundation of phone calls.)

2) But there is another way! You can put all of the plausibly public content onto the public website. If the CLS needs a public presence, the only way for it to have a coherent website is to make all of its marketing and communications materials public. If they want to shape their information for different audiences, they can easily do so with the conventional means of pages “for current students,” “for alumni,” etc. If the categories fail for any reason, the user can use the search function as a backup, with a high likelihood of success: this is the Single Rabbit Hole of Completed Tasks and Happy Grinnellians. (In this scheme, an intranet* still has a useful function as a sorting system for internal documents that need controlled permissions. Departments, committees, classes, and ad hoc groups of individuals can use it to share what they need to share in a controlled way.)

We need to stop thinking that throwing more money and labor into approach #1 is going to solve the current, drastic problems of the site’s organization and usability. We have excellent people working hard on each of the two Rabbit Holes. They make their decisions thoughtfully and help people effectively when called upon. Their work will never pay off in a system that requires user frustration as an essential feature, perhaps the essential feature, of navigating between the public and private sites. This is a happy case in which the value of sharing our information with a broader public also produces a site that is more welcoming and easier to use.


*Note: Edited from “GrinnellShare.” We need some way to share files, not necessarily the Microsoft way.

Declaring my candidacy for President?

I am ready to declare my candidacy for the Presidency, having won the very tiny caucus of the Interrogative Party. Motto: “You’ve got answers? We’ve got questions!” My wobbly platform will consist only of complicated questions that I don’t know the answer to and would enjoy hearing your thoughts about.

First plank: Aside from issues of access, what would happen if we made American public higher education tuition-free?

(I’m not dismissing the importance of access. I care deeply about it and have Very Strong Opinions about how to achieve it. But plenty of people, including other Presidential candidates, are debating that issue already.)

Starting points:

1. At any given time, about a third of American college students are attending two-year public colleges, and among students at four-year schools, nearly half have had some experience at a two-year school. At the two-year schools, the average tuition is about $3,800 per year. Michigan-Ann Arbor’s out-of-state tuition is about $43,000 per year, or about $14,000 for in-staters. (These numbers are tuition-only, not comprehensive fees, and do not account for financial aid.) How would zeroing out tuition change the respective roles of two-year and four-year public education in the American system?

2. At flagship public universities, the non-tuition costs are approximately $15,000 per year for residential students. How would the amount and significance of these costs change in a tuition-free system?

3. How would the role of international students in American colleges and universities (both public and private) change if the public ones stopped charging tuition?

4. How would changing the source of operating income from (primarily) tuition to (largely) direct government funding change the roles of politicians and their appointees in the governance of higher education? How would the mode of education itself change?

5. We often hear about European systems in discussions of tuition expenses, but the U.S. has a different kind of system of private higher education. How would the American system of private higher ed affect (and be affected by) the dynamics of a shift to tuition-free public higher ed? What changes in cultural capital and competitive dynamics would result from the shift?

I’d genuinely love to hear your thoughts on any of these points. I’d prefer that nobody mentions any politicians from other parties. Ambivalence, uncertainty, and especially curiosity are very welcome.

Does patience pay off on the job market? Here’s an article that won’t tell you the answer.

Last week, I was in conversations with two groups of people seeking or soon to seek academic jobs. Though located at two different institutions and coming from a wide range of disciplines, the groups shared a new concern added to the usual ones: the new Chronicle of Higher Education article called “On the Academic Job Market, Does Patience Pay Off?” Many readers seem to share the alarm of (currently) the first comment under the piece: “This is extraordinary information… more evidence of how merciless the academic job market has become. Graduate students need to be aware of these numbers from the moment they start a program.”

Seeing the impact of the article on the job seekers, I read the piece, and I found a problem: it does not answer the question it asks. “Does Patience Pay Off?” is an answerable question, at least at the level of statistical generality, and we can make it more precise by rephrasing it: “If a candidate stays on the job market for multiple years, does the probability of securing a job in any given year go up or down over time?”

The piece in the Chronicle, however, answers another question: of the jobs secured in a given year, how many go to people at each stage of their job search? The cited statistics reveal that, across many fields, about half of jobs go to applicants who are ABD or in their first year after completing the doctorate, and a strong majority of jobs go to candidates who are ABD or within four years of completion.

The shape of these numbers is entirely explicable by the nature of a competitive market: assuming a constant number of new applicants per year, a much lower number of new jobs per year, and an equal chance for every candidate to get a job, a mature market will award roughly half the jobs to applicants in their first two years on the market and, of course, many more to the group that also includes the next three classes of applicants.

That is, of course, a lot of jobs go to the classes first hitting the market: that’s where the largest numbers of applicants are. Those classes are bigger than the more seasoned ones because some of the latter applicants will have gotten jobs already, and some of them will have dropped out of the market entirely. You can see these effects play out in a simple spreadsheet model that I made. My applicant-bots have the same chance of getting a job every year they apply, and as their market matures, it produces data similar to the Chronicle’s.
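The spreadsheet itself isn’t reproduced here, but the same idea can be sketched in a few lines of code. The parameters below (100 new applicants per year, 20 jobs per year, a 25% annual dropout rate among those who don’t get hired) are illustrative assumptions, not figures from the Chronicle piece; the point is only that equal per-candidate odds still concentrate hires in the newest cohorts.

```python
def simulate(new_per_year=100, jobs_per_year=20, dropout=0.25, years=50):
    """Deterministic cohort model of a job market where every active
    candidate has an equal chance of being hired in any given year."""
    cohorts = []  # cohorts[k] = candidates still searching, k years after entering
    shares = None
    for _ in range(years):
        cohorts.insert(0, new_per_year)   # a new cohort enters the market
        pool = sum(cohorts)
        # Equal odds for everyone means each cohort's share of this year's
        # jobs is simply proportional to its size.
        hires = [jobs_per_year * c / pool for c in cohorts]
        # Unhired candidates either persist or drop out at the dropout rate.
        cohorts = [(c - h) * (1 - dropout) for c, h in zip(cohorts, hires)]
        shares = [h / jobs_per_year for h in hires]
    return shares  # fraction of this year's hires going to each cohort age

shares = simulate()
```

With these assumed numbers, the mature market hands roughly half of each year’s jobs to the two newest cohorts, even though every individual candidate’s odds are identical, which is exactly the pattern the Chronicle’s statistics describe without telling us anything about whether persistence changes an individual’s chances.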

So does patience pay off in the academic job market? I’d still love to know.

Good information and bad coverage of college costs

We’ve gotten some rare good news about transparency in college costs: the U.S. Department of Education’s new College Scorecard, though limited in many ways, gives students and their families quick, easy ways to understand some of the realities of college costs normally hidden by simplistic discussions of sticker prices. But we need to understand what the tools do and don’t offer.

Today’s Chronicle of Higher Ed is not helping. Costs are at the center of Beckie Supiano’s “What Actual High Schoolers Think of the New College Scorecard.” The piece notes some of the advantages of the College Scorecard, but its pessimistic ending frets about students having too much information to process, and the final–and memorable–anecdote of a student using the site describes an important moment in learning about college costs:

Jimena [Alvarez, a high school sophomore] searched for the University of Miami, and was immediately presented with its $30,000 average annual cost. Her reaction? “Oh, no, I can’t go there,” she said. “Or maybe I can, but I’ll have to have a lot of student loans.”

The Scorecard provides further detail on what students might pay at each college, including information on typical debt, a breakdown of net price by income band, and a link to the college’s net-price calculator. But Jimena had a strong initial reaction, and it wasn’t clear she ever made it far enough into Miami’s data to realize she could get a more personalized price.

The moral of the story seems to be that poor Jimena Alvarez’s “strong initial reaction” prevented her from finding the important truth of the story: if only she had gone “far enough into Miami’s data” to find her personalized price, she would have gained a subtler and more valuable understanding. The curious omission of what she would have found leaves the reader to think that more information would have reassured her and perhaps maintained her interest in Miami.

But the condescension is unwarranted. In fact, Alvarez understood exactly what the College Scorecard most valuably conveys: Miami is an extremely expensive university. That average cost of $30,394 is almost double the mean, as Alvarez could see clearly on the chart. If she did dig deeper, she would find even more daunting news: the annual cost for families with incomes of $0-30,000 is a staggering $20,783. Florida State’s cost for such families is $11,542. Harvard’s is $3,897. The differences are just as stark in the other income brackets under $100,000.

As limited as the College Scorecard is in some ways, this anecdote presents one of its strengths: the Scorecard emphasizes costs rather than tuition prices, allowing it to convey a much more accurate sense of relative affordability than most conversations about higher ed do. The victory of the Scorecard, in fact, lies in an absence: Supiano’s article never uses the word “tuition.”

The questions we ask our students (and the ones they answer)

The accreditors are coming around to our campus again soon, so assessment is on the march. We held a two-day writing assessment workshop on campus over the summer, and I participated in scoring essays written by first-year students the previous fall. I came away just as skeptical about the quantitative assessment of college writing as I have always been, but I nonetheless found myself shaken by how much the exercise showed me about the pedagogy of college writing.

Recognizing the limitations of giving everybody the same prompt, detached from any connection to course content, the framers of our assessment project—a group of skilled and thoughtful people—gave the teaching faculty some directions about framing their writing prompts but left room for tailoring them to each class. This approach represented our effort to avoid the Scylla and Charybdis of writing assessment: the distorting artificiality of standard exercises, on the one hand, and, on the other, the inability of standardized questions to capture the kind of context-specific scholarship that we most want our students to practice. I was on my first committee trying to navigate those waters in about 2002; I haven’t yet seen anyone find safe passage.

In this latest assessment exercise, the variation among the faculty-written prompts was dizzying. Some were detailed, to the extent that they sounded like guidance for writing full-length scholarly articles. Some consisted of a single sentence inviting the student to analyze two writers, period. Some asked for summary followed by analysis. Some asked students to respond to passages that we faculty had trouble understanding out of context. My point is not that the prompts were bad but that they were so varied that it would be hard to imagine them producing writing that we could assess with a consistent set of criteria.

The real surprise came from reading the students’ essays. In crucial ways, their writing revealed that the students often had not read the prompts carefully, and they were right not to do so. The prompts asked for different kinds of writing, but the students responded in largely uniform ways. They understood the assessment exercise. Most of them have done similar things throughout their elementary and secondary educations: they knew they were supposed to write a short essay, conventionally structured, with some quoted evidence sprinkled in.

And indeed, that’s exactly what we assessed. With our rubrics and inter-rater reliability training in place, we were almost always able to score the essays in a straightforward way because the students knew to rely on the skills that had been praised and rewarded so often in their educations, no matter what their teachers tried to tell them on a given assignment.

The students’ ability to perform assessment-ready writing humbled me in two ways. First, it reminded me that students have often deduced my expectations when I have not explained everything that they need, even though I tend to explain a lot. The assessment exercise showed me how much we all lean on unstated expectations. Second, I gained a new way of thinking about how difficult I have found it to try new kinds of assignments, even with students who are curious, creative, and ambitious. Now I see such assignments in this light: every time I take a step away from an assignment that boils down to “Write an essay of length X on topic Y,” I remove some of my students’ confidence that they know what implicitly earns rewards in academic writing, even if the explicit requirements are incomplete or difficult to understand.

I still want to push my students and myself to break away from conventional essay assignments. I want them to become capable editors as well as readers, to give presentations that deploy ironic as well as explanatory slides, to work productively as members of creative teams that must evaluate their own work and choose how to share it. As I ask them to learn these skills, however, I will do so with a renewed awareness of how much I am requiring them to leave behind the techniques and assumptions that have gotten them to this college in the first place, and I need a similar sense of humility as I encourage colleagues to try new techniques and assignments. I have been thinking especially about the dynamics of classroom authority, race, gender, sexuality, class, and disability: it is easier for some of us than others to ask students to step away from expectations they know they can meet.

I am just beginning to turn from these thoughts to building a structured sense of how to respond constructively to them. From conversations I have had so far, I suspect that my thinking will draw heavily on the methods of my colleagues in the creative arts, for whom it is nothing new to ask students to express vulnerability, to judge one another’s work constructively, and to work in teams whose members have complementary skills. More to come.

Failing badly, failing well

When I go to conference panels on the digital humanities or public humanities, I find that many presentations begin with a dismissal of the kind of assignment where a student writes a paper merely for the audience of a teacher. In many ways, I share this suspicion of the two-person academic conversation; though it has value as a means of practicing formal writing and receiving a careful response, we can replicate and add to that value in collaborative, public-minded projects.

As a community, however, we may not have fully appreciated another advantage of the traditionally graded paper assignment: it fails well.

Students, of course, encounter all kinds of obstacles, from false starts in their research to personal or medical problems to competing priorities. When I assign a traditional paper, I can respond to these situations with a set of tools that I have learned to handle reasonably well: extensions, incompletes, a B-. Whatever has gone wrong, and however I respond, the problem remains mainly between the student and me.

The more I create assignments based on teamwork, editorial practices, and audiences beyond the classroom, the more I find I create models that fail badly. One student depends on another meeting a deadline; mistakes become public; the boundaries of the semester limit my ability to alter deadlines and other expectations for collaborative groups.

Now, as I encourage colleagues to try new kinds of tools and practices, I feel another layer of responsibility here: I need to be able to help develop pedagogies that both succeed and fail well. How have you worked to make collaborative and digital projects fail better?

Locating faculty offices

Here’s a question I’ve been pondering lately, in the space planning process that commands much of my time and attention these days: should we organize faculty office spaces by department?

In almost every academic building I know of, members of a given department have contiguous offices, or as close to contiguous as possible. I see the benefits of contiguity: a sense of departmental identity and ownership of the space around the offices, easy navigation for students and others looking for a member of a given department, smoothing of department-based logistics such as a student getting signatures from an advisor and chair from the same department.

On the other hand, if we want to encourage collaboration across disciplinary boundaries, departmental contiguity seems, on its face, the worst way to represent and encourage such work. Furthermore, the traditional arrangement reinforces the sense of alienation often felt by faculty members who do not have colleagues in their discipline, perhaps especially at small institutions. Even if we assign such people to departments administratively, arranging offices by department can remind such people daily that they do not have a disciplinary fit: I’m in the sociology building, one might have to say, even though I’m not a sociologist. This year, I have heard high-level people at two colleges saying that if they could assign offices from scratch, they would do so by lottery, letting biologists and poets mix in a literally random arrangement.

In my building, we have happened upon a third way that I like a lot. In a fairly small building of twenty-some faculty offices, we have the faculty serving three majors: English; History; and Gender, Women’s, and Sexuality Studies. Anyone with even a little sense of the campus’s academic geography knows where to find those faculty, but within the building, we are shuffled; any given office can belong to any faculty member, and we even move around once in a while. We thus combine the benefits of geographical identity with those of a mild version of mixing.

In our current space planning process, we are contemplating a new building that will house the faculty of the social studies division and the humanities, except for those in the fine arts. I wonder whether we might attempt office assignments by cluster, capturing some of the fluidity of interdisciplinarity while retaining a general sense of campus locality. I wonder whether any readers have experiences, good or bad, with office arrangements other than departmental blocks.

Awarding credit for online courses is not optional.

A point came up in a recent meeting that I had not yet considered. I was told that the issue of whether to award transfer credits for online courses came before one of our committees. The registrar told the committee that we’ve probably awarded such credit already, because in most cases there’s no way to tell from a transcript whether a course was given online.

Well, then. Ready or not . . . .