Note: this is a written version of the keynote address I gave at the Education Summit at the 2008 Game Developers Conference. The original presentation was extemporaneous and included evocative (rather than explanatory) slides. This version has been adapted from the presentation and the slides in a manner that will hopefully preserve the ideas fully while maintaining their original context: an address to both developers and academics at a major industry conference. Coverage of the original presentation is also available via Game Career Guide. The reader might also be interested in a few short notes I wrote about this material at the time I originally posted it.
The Game Developers Conference comes early this year. Usually we convene in March, a fact many in this room of educators may appreciate by recalling previous GDCs that have fallen, tragically, during spring break. But this year the conference comes not a few days after Easter or St. Patrick’s Day, but a few days after Valentine’s Day.
I was thinking about Valentine’s Day, and consequently about love and marriage, for a few reasons last week. Besides the obvious one, I regretted missing a dear friend’s marriage ceremony because I was traveling for another speaking engagement. It was his second wedding, and I had the privilege of serving as best man in the first.
Now, the problem with asking a philosopher to be your best man is that he might fill the requisite first toast with rumination more than anecdote, as I indeed did. The gist of my thoughts then was this:
The point of marriage is not to make a commitment when we are certain, but to make one precisely when we are not certain, maybe even because we are not certain, and to preserve that uncertainty as a guide when our relationship with another person changes, as it assuredly will.
That doesn’t mean we stick it out no matter what, as my friend learned. But it means we make a commitment to recognize that when we put two unrelated things together, they never become one. They come together, but both change, over time, in ways we can’t anticipate up front.
In love, and in marriage, there is always a risk of failure. Without the risk, in fact, there is never the possibility of success. Love is rich and subtle and complicated and messy, and that’s where we find its beauty.
By contrast there is a four-letter word for the way we marry ideas in academia. Of course, we are too prolix even to make our four-letter words measure so short, so it’s actually a nineteen-letter word, but with the same connotation.
The word I’m referring to, of course, is INTERDISCIPLINARITY.
To try to prove just how overused “interdisciplinarity” really is, I conducted an unscientific survey of the uses of the term “interdisciplinary” at institutions at which I have taught or studied, by counting Google search results for the term on each one’s official website. Here were the results (figures updated 9/2009):
At Georgia Tech, my current institution, 13,200 uses of the term.
At the University of Southern California, 13,300.
Cornell logs a whopping 18,500.
UCLA, not to be outdone, boasts 20,700.
Are there really all these tens of thousands of legitimately interdisciplinary programs and initiatives? I doubt it. Just think about the scholars you meet at conferences and events. Doesn’t everyone seem to be working in an “interdisciplinary program?” Or worse, in typical scholarly fashion, they might distinguish between many sorts of interdisciplinarity through further abstruse intellectualization: crossdisciplinarity, multidisciplinarity, transdisciplinarity.
“Interdisciplinary” is a tired, overused word. It is a word that we use when it becomes too inconvenient to try to understand the complex relationships between fields. It is a rhetorical move more than a tactical one.
In a paper presented at last year’s Modern Language Association meeting, Katherine Hayles called this sort of strategy “weak and flabby.” Weak and flabby because we throw this term around as a self-reward for connecting close cousin disciplines, ones that are so similar to one another that their fusion should merit neither ribbon nor medal. Consider as examples the following “interdisciplinary” programs I collected with relative ease while browsing through a number of universities’ marketing materials:
International Studies & Business
Information Technology Management
American Studies
Neuroscience
Nursing & Health Care Management
Another example is Comparative Literature, an “interdisciplinary” field in which I happen to hold several degrees. Despite my fond feelings for it, I have to admit that Comp Lit is another field that has long exemplified weak and flabby interdisciplinarity in the humanities. Here, interdisciplinary traditionally meant “French and German.” In a more contemporary course of study, that might change to “Althusser and Lacan,” but the spirit is the same.
Sitting on a number of job search committees has further helped me find signs of weak and flabby interdisciplinarity. One of the hallmarks is the concept of the “intersection.” Look for it in faculty bios or cover letters: “I study X at the intersection of Y and Z.” The intersection aphorism is a convenient trick. It is, literally, a place where one could turn but need not, and indeed probably doesn’t. The road continues straight ahead, and most pass through the intersection without so much as a quick glance for oncoming traffic.
Most of the time, we want the rhetorical heft of another discipline without the trouble of engaging it too directly. Here’s an example: a colleague in communication once told me a story about a robotics project he had embarked on with a group of engineers. They needed an expert in technology and ethics to fill out the requirements of a grant. When it came time for the first meeting, they started planning the roles. One scholar was to do the sensors, one to do the software, and so forth. Then they pointed to my colleague, “And you do the ethics.” If only things could be apportioned so neatly.
Interdisciplinarity is a troubled concept in the general sense, one deserving of considerable rethinking. But for the present audience, questioning the term brings up an interesting problem for videogame scholarship and education. If we consider a few of the positions we have collectively taken on the matter, some of the troubles of interdisciplinarity in games will become clear.
One way games education has been understood is as an entirely new discipline within the larger ecosystem of existing disciplines. Videogames, we sometimes hear, are neither literature nor film nor art, and ought not to be “colonized” by those or other disciplines. Rather, they deserve to be treated on their own. The problem with this perspective is that it encourages isolationism. It makes unreasonably strong claims that prevent videogame scholarship and education from making legitimate connections with existing disciplines.
Another way games education has been understood is as an interdiscipline. Videogames are complex media artifacts that require contributions from art, programming, design, music, and business. They would seem to be a textbook candidate for interdisciplinarity. But this is a problem as well, as all too often such approaches end up siloing each of these abilities into different subfields or domains of expertise, with “teamwork” bringing individual experts together to accomplish common product goals. This is what scholars of scholarship would call “multidisciplinarity,” the aggregation of multiple disciplines without any meaningful integration. It’s the problem exemplified by my friend and the robots.
Another way we sometimes understand games education is as a savior discipline, one poised to swoop in and rescue the game industry from itself. One way this position expresses itself is in the popularity of serious games, educational games, or experimental game design in academic programs. If we generalize, this position argues that videogames have not, and cannot, achieve their full potential without intervention, and the academy will rescue the medium by filling in all the empty spaces unseen or unheeded by industry. The idea that games are broken and need remedying is a hubristic position, one belied by our own limited ability to make institutional progress within our own walls.
In truth, we have a particularly hard relationship to forge in videogame education. There are too many fields of knowledge with too few commonalities. And interdisciplinarity is too cold and intellectual a concept to help make sense of it. This is hard like love, not hard like calculus.
Subjects like videogames are complex because they don’t abide the close-cousin, weak and flabby sort of interdisciplinarity at all. What if instead we thought about the challenges of marrying the different topics of complex subjects like videogame education as something more emotional and less intellectual … something we can structure to last, we hope, but only if we embrace the uncertainty of the coupling and the inherent, irreconcilable differences between the participants? Something more like … love.
Here’s one way to look at this particular disciplinary pairing. Videogames are a marriage between ideas and computation. In fact, videogames are likely the first major test of such a pairing. Let’s take a look at each party in a little more detail.
The best and most successful games have a strong, deep relationship with ideas.
Take the collective work of Will Wright as an example. The great thing about Will Wright is not that he is a brilliant game designer, although he is that. The really great thing about Will Wright is that he has many interests, pursues those interests, and expresses them in games.
SimCity was inspired in part by Jay Forrester’s Urban Dynamics, a book about the complex, counterintuitive causes and effects of complexity in cities. SimEarth was based on James Lovelock’s Gaia: A New Look at Life On Earth, which suggests that the earth can be thought of as a single organism with interoperating living and nonliving parts. SimAnt was inspired by Edward Wilson and Bert Hölldobler’s Pulitzer Prize-winning study of ant colonies, The Ants. The Sims was motivated by a number of related interests, including exposure to Christopher Alexander’s work on architectural design patterns (A Pattern Language), Abraham Maslow’s on human motivation and the hierarchy of needs (Motivation and Personality), and Paco Underhill’s on consumer behavior (Why We Buy). And the core concept in Spore derives from the idea that life can reproduce through space colonization, a concept articulated by Lovelock and others as the “Gaia spore.”
Some might argue that Will Wright is an unfair example of an overly well-read game designer. But ideas found other successful games just as well. The secret isn’t in the reading (although it helps). Rather, it’s in the deep, earnest interest in ideas. Consider Bioshock, one of last year’s most successful and critically acclaimed games. Sure, it’s a shooter rather than a simulation, but at its heart Bioshock isn’t about shooting. It’s about art deco design, embodied in architecture, interior design, and the decorative arts, and its relationship to progress. And it’s about Randian objectivism. You can’t take these ideas away from the game and have much left worth celebrating. Or take other popular games, like the Guitar Hero and Rock Band series. At their heart, these games, along with the previous, more abstract music games created by Harmonix and Tetsuya Mizuguchi, are about music, musical formalism, and musical performance.
Great game designers have always come from outside the games industry, because they always have had to. We need to ask ourselves: do we want to take this away?
These love affairs between games and other disciplines have been successful because the one never tried to fully account for the other, and frankly the videogame part was a matter of convenience or accident as much as it was a deliberate choice for a medium of expression. When we teach videogame development as an isolated medium, or as a melange of disciplines tied by teamwork, we risk absconding with young people’s existing and developing interest in ideas, even before they know they have them.
Someone once told me that the brain ossifies by graduate school. By then, according to this account, the way we’ve learned to see and understand the world is basically locked in and unchangeable. This may be an overstatement, but it’s surely true that the tools we learn to use have an influence on the way we see problems and solutions. There are natural structures that constrain thinking, like language. There are methodological ones, like qualitative analysis. And there are theoretical ones, like deconstruction. We tend to learn structures like these in school, and we approach problems relative to the tools we learn. It’s the old problem of the man with a hammer.
Traditionally, the liberal arts have cultivated a general interest in ideas and learning, providing not just historical and cultural contexts, but more general training in how to think and how to learn. But there is a problem: these modes of thinking, learning, and context do not account for computation.
James Duderstadt, the former president and dean of engineering at the University of Michigan, Ann Arbor, recently led the authorship of a report called Engineering for a Changing World (pdf). In the report, he accounts for this mismatch between technical and liberal arts disciplines. Duderstadt suggests the solution is a two-way street, a relationship, one that involves both “imbedding engineering in the general education requirements of a college graduate for an increasingly technology-driven and dependent society of the century ahead” and “reconfiguring [it] as an academic discipline, similar to other liberal arts disciplines in the sciences, arts, and humanities.” This shift, argues Duderstadt, would allow students “to benefit from the broader educational opportunities … with the goal of preparing them for a lifetime of further learning rather than professional practice.”
One clear application for engineering and the liberal arts is in computation. This means much, much more than making sure everyone can write a little scripting code. Computation is at the heart of the discipline of videogames, and it is impossible to conceive of expressing ideas about urban dynamics or objectivism or anything else without computation. As computation has become more complex, it is tempting to isolate expertise in it and draw on it where necessary — the silo model. But the opposite is true: now more than ever, this is a reason to make computation more, not less bound up with the pursuit of ideas.
This expertise must go beyond programming too, extending into the materiality of specific computer systems and the history of their evolution. How can anyone understand the first thing about videogames if they don’t see how a game like Crowther and Woods’ Colossal Cave Adventure, developed for and originally played by reading and inputting text on a PDP terminal, differs from Robinett’s Adventure for the Atari VCS, which translated all of PDP Adventure‘s textual forms and commands into graphical ones? Experience with many different types of computing systems, from high-level programming environments all the way down to the metal of specific chips, helps students see that the evolution of computer machines is not one of natural progress. It helps them see that interfaces like Wii Remotes and joysticks are not magic wands that make interactions smooth and natural, but devices made of sensors and chips and casings developed in particular historical circumstances with material affordances and constraints. The quirks and idiosyncrasies of particular machines and their relationships to particular software tools are part of what make videogames interesting and unique. This is true both from the perspective of development and that of criticism.
Furthermore, there is an implicit point in the Duderstadt report worth making explicit: a focus on the individual. Educators of videogames and of many other disciplines often harp on the importance of teaching our students to work in teams. But we also must teach them to work alone. Many of the conventions and techniques we take for granted today were conceived of and developed by individuals working alone, people like Will Crowther and Warren Robinett and David Crane and Will Wright. It is, I think, more than coincidence that many of the games that have appeared in the Independent Games Festival at GDC over the past couple of years have been conceived of and implemented by very small teams. Consider Everyday Shooter, whose design, art, music, and programming were all done, and in a uniquely integrated way, solely by Jonathan Mak. This is a true marriage of computation and ideas in an individual mind.
One of the reasons we suffer under weak and flabby interdisciplinarity is the structure of academic institutions themselves. By and large, universities share the same structure, one of isolated departments and colleges, each with its own disciplinary preconceptions, all competing for the same limited resources under a shroud of complex institutional politics. In her talk at the MLA, Kate Hayles offered a possible solution, the dissolution of all departments in favor of a structure she called “clusters.” In this new system, students would declare “problems” instead of majors, identifying the mentors with whom they would work on a problem rather than being shoehorned into the single tracks of a limited number of pre-approved courses of study. I suggested a related approach to networked research in the final chapter of my first book, Unit Operations.
This is, of course, how we actually address topics in research. But it’s also not easy to imagine anytime soon, and it would literally require the dissolution of the institution as we currently know it. If programs might best bring together radically different topics of study, as videogames do with ideas and computation, through a model more like love than like science, then the disciplines they contain might be a sort of star-crossed lovers, a pair doomed to trouble, to suffering, even to defeat.
Unlike the most famous of star-crossed lovers, we needn’t take our lives in protest like erratic children. But we might have to maintain this relationship in the shadows, outside of the structures of the institution, tending to it as the fragile, tentative relationship that it might always need to be.
It will not be sufficient to be the controlling spouse or the naive paramour. We will have to humble ourselves in the face of methods and purposes that exceed our expectations. The relationship expressed by videogame education must not be an instrumental one. We must not do this for intellectual fashion or boosted enrollment, but because we are the stewards of many different potential relationships with videogames, all of which preserve a state of uncertainty between the two partners.
As educators in games — or by extension in any subject formed by the love affair between unlikely mates — we are more matchmakers than pedagogues. Our job is not to find the best way to merge disciplines that share little commonality of history and method, but to let the two embrace, snit, settle, grouse, infuriate, storm off, and reconcile. Let’s reject the cold industrialism of interdisciplinarity and embrace the warm humanity of unlikely mates. Indeed, perhaps the right word for the binding of inherently different disciplines is the same as that of inherently different people: love.