Tuesday, March 9, 2021

Six Rounds with Wittgenstein (Part 5) ... and also a review of the Valiant megaWAD

watch it—my gf is a drooling supermutant

Today we're going to continue leapfrogging across Wittgenstein's Philosophical Investigations (1953). We're also going to go off on a big tangent about video games that will eventually turn into a review of a Doom mod. Because we can, and that is the spirit and joy of blogging.

547. Negation: 'a mental activity'. Negate something and observe what you are doing.——Do you perhaps inwardly shake your head? And if you do——is this process more deserving of our interest than, say, that of writing a sign of negation in a sentence? Do you now know the essence of negation?

549. "How can the word 'not' negate?"——"That sign 'not' indicates that you are to take what follows negatively." We should like to say: The sign of negation is our occasion for doing something——possibly very complicated. It is as if the negation-sign occasioned our doing something. But what? That is not said. It is as if it only needed to be hinted at; as if we already knew. As if no explanation were needed, for we are in any case already acquainted with the matter.

551."Does the same negation occur in: 'Iron does not melt at a hundred degrees Centigrade' and 'Twice two is not five'?" Is this to be decided by introspection; by trying to see what we are thinking as we utter the two sentences? 

First: negation is a feat of the verbal animal.

Consider the pigeon trained to peck at a button when it's colored red and not to peck at it when it's colored blue—even this description is a reification. Properly speaking, what "not pecking" means is "doing something other than pecking, but we don't specify what it is." The omission is expedient insofar as this is one of many occasions where we are solely concerned with whether X is or isn't Y.

"No I didn't read the email you sent me last night" is a more useful answer than "last night I watched YouTube, ate dinner, clipped my toenails, took five pisses, got drunk, jacked off, licked every doorknob in the house..." and so on until every act is accounted for, leading the patient inquirer to finally concludes that "read your email" didn't occur.

In verbal constructions of logic, which are often concerned with the relational frames in which any given entities or events might be situated, "X is not Y" is what we're left with when we're restricted from specifying any characteristics of X and Y beyond their identities with regard to each other.

To dredge up the old aphorism "nature abhors a vacuum," nonbeing means nothing outside the context of the verbal animal's repertoire. Skinner writes:

Possibly the [qualifying autoclitic function] most discussed is no. What is the referent of this response (or of its related forms, not, never, and nothing)? In a logical or linguistic analysis, we may perhaps say that the referent of no rain is the absence of rain, but this is clearly impossible in a causal description. If the absence of rain evokes this response, why do we not emit a tremendous flood of responses under the control of the absences of other things?...[Bertrand] Russell thinks that the reason is always verbal. Someone asks Is it raining? and we reply No, it is not raining. "Thus," says Russell, "negative propositions will arise when you are stimulated by a word but not by what usually stimulates the word."

But the stimulus which controls a response to which no or not is added is often nonverbal. The response It is raining then shows generic or metaphorical extension. Or a common accompaniment of rain——say, a threatening sky——may evoke the response as an example of metonymy. The extended nature of the tact is suggested by the commoner alternative response It looks (or feels) LIKE rain...Other responses to which no or not is added may be intraverbal; some irrelevant contiguity of usage has strengthened a response which, if not qualified, would have an inappropriate effect upon the listener. In each instance a response of some strength is emitted, but it is emitted under circumstances in which it is not reinforced as a tact by the verbal community and may even be punished. This additional condition, acting upon the speaker, is the occasion for adding the autoclitic no or not...

In addition to standard forms of response containing not, there are many intraverbal sequences which are responsible for responses in which an autoclitic function is very slight or lacking...In particular, the affixes which serve an autoclitic function tend to become assimilated in standard forms. A sunless sky is a kind of sky, and the response sunless may be as simply determined as cloudy. The response must have originated under circumstances (which doubtless still recur) in which the response sun was emitted and to which the speaker then added the autoclitic -less. Eventually the response is controlled, not by the absence of sun, but by the presence of a gray sky.

On 547: strictly speaking, "essence" is a generalization, hypostatized. (This goes back to the conversations in medieval philosophy about "universals" out of which nominalism developed.) On the face of it, it's easy to speak of the essence of, say, stone. For all the peculiarities of any particular stone on the planet, concentrated clumps of unrefined minerals in a solid state share many attributes in common. Our training in a verbal community conditions us to say "stone," "rock," etc. in response to objects possessing these attributes (or, rather, to the occurrence of these attributes in certain contexts). Though the generalized tact "stone" is appropriate to a limitless number of occasions involving pebbles, rocks, boulders, crags, cave walls, etc., we confound affairs by presuming that the "essence" the general term implies precedes the generalization instantiated by the verbally behaving organism. The illusion is made evident by a search for "essence" in events and abstractions. Skinner's example, as we've seen, was "poetry." Here are some more cases you can ruminate on, if you wish: what is the essence of "dog"? Of "revolution"? Of "male" and "female"? Of "capitalism" and "racism"? Of "nature" and "human"?

On 549 and 551: not to sound like a broken record, but getting to the bottom of what one does when one performs negation necessitates a functional accounting of that performance—introspection will only lead the inquirer in circles. But introspection is far easier and less time-intensive than devising a methodology, setting up experiments, and finding volunteer subjects. By the same token, nonsense generated through introspection is more easily detected than nonsense submitted with the imprimatur of a scientific journal and imbued with the esoteric prestige of technical language.

567. But, after all, the game is supposed to be defined by the rules! So, if a rule of the game prescribes that kings are to be used for drawing lots before a game of chess, then that is an essential part of the game. What objection might one make to this? That one does not see the point of this prescription. Perhaps as one wouldn't see the point either of a rule by which each piece had to be turned round three times before one moved it. If we found this rule in a board-game we should be surprised and should speculate about the purpose of the rule. ("Was this prescription meant to prevent one from moving without due consideration?")

At the risk of making too sweeping a generalization: elements of any social practice (games, language, greetings, table manners, etc.) that can be omitted without consequence are susceptible to elimination—unless they serve some other function in a wider context. In rough behavioral terms: if it is more reinforcing to let a breach go uncorrected than to punish the recusant and insist he observe a rule, then the rule will be shirked on future occasions. If this occurs at a high frequency, over a long period of time, and across a widening segment of the community, the rule will no longer be prescribed.

Think of English words that were once hyphenated in their proper spellings: "to-day," "electro-magnetic," "e-mail," etc. The hyphens gradually fell out of use as "lazy" spellings that simply joined the parts of the compounds went uncorrected because the hyphen had no bearing on the words' legibility.¹

Wittgenstein's first example reminds me of a somewhat obvious parallel in sport: the traditional rituals preceding a sumo match. We can call these inessential to sumo, if by "sumo" we only refer to the rule-governed physical competition that takes place after the referee gives the starting signal. But if we're talking about the institution of sumo, and if we acknowledge that the rituals serve a purpose in the broader cultural practices of which sumo competitions are a part, we must concede that they are not at all extraneous.

Something like what Wittgenstein describes with the example of rotating pieces on a game board lacks ready analogues in the real world. If a rule impedes play and serves no practical purpose in the game or in the social context in which the game occurs (this is really reaching, but imagine a theocratic society in which an interpretation of some religious rule dictates that all members of both teams in a sporting match must stand and bow their heads in prayer wherever they are when control of the ball is exchanged), that rule will likely be flouted and eventually abandoned outright.

I know I'm taking Wittgenstein too literally by dwelling on games—surely his intention was to draw attention to the remnants of historical contingencies in common social observances—but when I try to think of an example of a purposeless, long-abiding vestige in a game, I keep coming back to scoring systems in video games.

But before we get into that: just for fun, let's return to Wittgenstein's remarks about games and his questions about the essential thing wherein "game-ness" consists, and acknowledge that the extension of "game" into electronic media was metaphorical. What inconsistencies resulted from this extension?

When two people play chess or twelve people play basketball, the rules of play are determined and enforced by the participants themselves. No physical impediment restricts somebody from moving a pawn three squares or clutching the ball to his chest and sprinting toward the other team's basket; the contingencies of punishment keep players' behavior within appropriate bounds (i.e., somebody who's been castigated and sent away for refusing to play by the rules is less likely to do it again if participation is important to him). It's possible that obedience to the rules of a game is the result of a context acquiring control: somebody who, for instance, has played many kinds of billiards games and board games won't need to be punished for violating the rules of a card game he's just learned before he'll adhere to them.

In solitaire (provided it is being played with a physical deck of cards), the player upholds the rules herself. If solitaire is the first card game she learned, she would have needed somebody to teach her how to arrange the tableau, show her the rules by example, and watch her as she tries to play, encouraging correct moves and preventing illegal ones. If she is capable of playing the game by herself afterward, what we'd see is a situational repertoire controlled by conditioned reinforcers in the form of card arrangements. If she has experience playing other card games, she is apt to bring to bear her role as a rules-enforcer during multiplayer games upon herself while playing solitaire. Her history already supplies the conditions under which she plays solitaire without cheating.

At any rate, direct social mediation is responsible for what we observe when we watch her playing solitaire. And, if she's in a bad mood after a long night, there's nobody else preventing her from fudging the rules a little—by taking back her last four moves after getting stuck, for instance.

In a video game, the "rules" are the ways in which the software responds in a preprogrammed way to a player's inputs (which are themselves responses to the sounds and images the software displays on a screen).² Extending the category of games to comprise electronic artifacts makes it incumbent upon us, as coherence-seekers, to devise analogies between the rule-governed characters of ball-/board-/card-/running-/etc. games and video games—but it should be obvious from the outset that we've sent ourselves on a snipe hunt. A basketball player can try to kick the ball into the hoop if he wishes to do so and doesn't care about ticking off the people he's playing with; but under no circumstances can Link attempt to haggle with a shopkeeper or toss a bomb over his shoulder in the original Legend of Zelda. A player cannot perform in-game actions that the software of a particular game doesn't provide for. We could justly say that Legend of Zelda has more in common with a lever-pressing apparatus used in behavioral experiments than with anything Wittgenstein would have recognized as a game.³
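If it helps to make that contrast concrete, here's a toy sketch in C—nobody's actual engine code, with every name invented for the occasion—of what a video game's "rules" come down to: whatever the program is written to do in response to input, and nothing else.

```c
/* Toy sketch: the entire space of possible actions is the set of cases
 * handled here. An unmapped input isn't an illegal move; it's a non-event. */
#include <stdio.h>

typedef struct { int x, y; int bombs; } player_t;

static void handle_input(player_t *p, char input)
{
    switch (input) {
    case 'w': p->y -= 1; break;              /* walk north */
    case 's': p->y += 1; break;              /* walk south */
    case 'b': if (p->bombs > 0) p->bombs--;  /* drop a bomb */
              break;
    default:  break;  /* "haggle with the shopkeeper" isn't in the program,
                         so asking for it changes nothing at all */
    }
}

int main(void)
{
    player_t link = { 0, 0, 4 };
    handle_input(&link, 'w');   /* a legal move happens */
    handle_input(&link, '?');   /* silently ignored: no rule, no violation */
    printf("pos=(%d,%d) bombs=%d\n", link.x, link.y, link.bombs);
    return 0;
}
```

A basketball player can be tempted to travel and a chess player can be tempted to nudge a pawn an extra square; the program above can't even be asked.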

Hence, English-speakers refer to video game "mechanics" far more frequently than they mention the "rules" of video games. But this variance suggests it was the determinations of custom that slotted Pong and Pac-Man (and Gone Home and Firewatch, for that matter) for inclusion in the category of "games," not their participation in some essential quality of "game-ness." Had the creators of 1962's Spacewar and 1971's Computer Space referred to their products as, say, "software" or "entertainments," perhaps the controversy and whining surrounding Gone Home ("it's not even a game!") might have been even sillier.

Anyway.

Berzerk (1980)

We were talking about scoring systems in video games as purposeless vestiges in the same sense as Wittgenstein's example of a board-game rule that requires players to rotate a piece before moving it—and granting that we're making an analogy between the socially trained cues and rules that constitute a "game-context" and the engineered mechanics of electronic artifacts, because they're not the same thing.

Early video games, as you know, lacked the progression and variety of their more advanced successors. Once again, let's consider the 1980 arcade game Berzerk. Play consists of guiding your avatar ("the humanoid") through a tenebrous labyrinth full of hostile, trash-talking, trigger-happy robots. The walls edging each screen/room have openings at the four cardinal points (though when you cross from one screen to the next, a barrier prevents backtracking in the direction from which you came). As you progress farther through the maze, the robots fire off more shots and their "bullets" travel faster. But the game doesn't end. You keep running through the maze and returning fire at the robots until you get shot too many times and are forced to decide whether to plunk another quarter into the slot or walk away mad.

Hence, a points system: an objective metric of how well you perform, and the means by which the arcade-dweller of the early 1980s earned his bragging rights by enshrining his initials in the Hall of Immortals screen. The player's goal in Berzerk isn't to "beat the game" (because he can't), but to get as high a score as possible.⁴

As hardware improved, different kinds of games became possible—ones in which players advanced through a progressively changing in-game environment toward a designated endpoint—but the points-based scoring mechanics carried over from the earlier indefinitely looping games. A device that was once more or less necessary to give coherence to the way people used these artifacts slowly became extraneous as the constitution of the artifacts changed. Retaining the score system wasn't (and isn't) necessarily an act of developmental inertia, particularly when amassing points leads to in-game rewards—say, extra lives. But in many cases, it's clearly a vestige. Sierra's point-and-click adventure games (Space Quest, for instance) scored players as they worked their way through the hazards and puzzles; their superior competitors over at LucasArts saw no reason to arbitrarily add or subtract points based on how players chose to poke and sniff around the contents of Maniac Mansion and The Secret of Monkey Island. The original Mega Man used a scoring mechanic, which was promptly dropped in Mega Man 2 because it didn't do anything (i.e., no in-game rewards were doled out on the basis of points), and players were more interested in the challenge of completing a very difficult game than comparing point tallies with friends. Points were irrelevant to Street Fighter pretty much from the beginning (nobody invested in fighting games gives a damn about a score racked up playing against the CPU), and yet the scoring mechanic didn't go away until 2016's Street Fighter V. It was a stubborn vestige, but it nevertheless went the way towards which all vestiges tend.

The scoring mechanic in 1992's Wolfenstein 3D is an interesting case: racking up points earned the player extra lives, but the "lives" system—also carried over from the design of pay-to-play arcade games—was itself a vestige in a game that allows the player to save her progress at any point in a level and simply load the save file if she loses a life. When the Wolfenstein 3D team went on to develop Doom (effectively, though not officially, a Wolfenstein 3D sequel), the lives system and scoring mechanics were both discarded.⁵

If we wanted to continue the exercise the way Wittgenstein wanted us to, we could shift our focus to other areas in which the passage of time and the vicissitudes of material conditions denude customary practices of their former utility—but, actually, I'd rather talk more about video games. In particular, I'd like to talk about Doom and a fan mod I recently had the pleasure of playing.

Doom II (1994)

The longevity of classic Doom (albeit as a niche game, for the most part) is no more surprising than Super Street Fighter II Turbo remaining a mainstay in the competitive gaming scene almost three decades after its release, or the continued interest in Super Mario Bros. after almost four decades, recently demonstrated by the zeal shown by players of the online game Super Mario Bros. 35 in beating over five million Bowsers in just ten days. If you've never played Doom II—or just haven't found the right occasion to revisit it since 1995—let me assure you: it holds up better than you can imagine. Going into it with a mindset of "appreciating it in the context of its time" is totally unnecessary.

Considered purely as a first-person shooter, classic Doom is exquisitely balanced: so simple as to exemplify the streamlined essence of the FPS, yet never stultifyingly basic or repetitive.⁶ Its fast-paced, nonstop combat rewards split-second decision making, rapid but controlled responses, and composure under pressure. Despite being easy to learn, it's difficult to master. Figuring out how the weapons work, learning how the monsters behave, and getting a sense of where mapmakers typically like to stage ambushes (the Doom veteran grits his teeth and takes a deep breath when approaching a key on a plinth) only gets you so far. The rest is skill and twitch reflexes.

Classic Doom's fundamentals are so solid that even the games' developers only scratched the surface of their creation's potential in Doom and Doom II. When the first map-editing software began circulating in 1994, thousands of Doom enthusiasts took a break from splattering demons to try their hands at making their own levels. If the output of my friends and me is at all indicative of the average fan-made WAD's quality, most of these levels were garbage. But some people had a real knack for Doom level design, took it seriously enough to dedicate substantial time and effort to it, and submitted their projects to the Doom-playing public via what was then popularly called The Information Superhighway. These amateur mapmakers had played the commercial releases enough times to know what worked and what didn't, and were able to push the envelope even farther than id Software. The opinion among connoisseurs seems to be that 1996's Final Doom represents the best commercial release of classic Doom, and each of the two megaWADs it comprises began as fan projects. The versatility of Doom II as a toolkit far exceeds its value as a game (and it's a great game).

Classic Doom's greatest assets are its arsenal and rogues' gallery. I seem to recall going on and on and on in an old writeup on Blood Omen: Legacy of Kain about Silicon Knights' inspired decision to make weapons and armor conditionally useful, as opposed to the "strictly better" equipment upgrades in adventure games like A Link to the Past, Secret of Mana, Super Metroid, etc. Doom took a similar departure from its progenitor, Wolfenstein 3D, whose weapon upgrades consisted of "the better gun" and "the best gun." In Doom, every weapon is potentially the best weapon for an occasion. The BFG 9000 might be the biggest and most powerful gun in the game, but only an amateur fires it off indiscriminately. The rocket launcher dispatches crowds and tanks alike in short order, but using it demands a heightened awareness of your surroundings (or else you'll blow yourself up). The super shotgun is the game's dedicated workhorse, but sometimes the vanilla shotgun is better used to attack at long range, to conserve shells, or when a higher rate of fire is required. And the pistol—well, it's better than the fist, isn't it? Unless you've picked up a berserk pack, that is. And even when you're berserked, there are situations where it might make more sense to employ the pea-shooter from a distance if it's your only option for a ranged attack. But we're belaboring the point.

Concerning the enemies: I'd rather not waste your time and mine with a bullet-list overview of Doom's zombies and fiends and all the ways they've killed us over the years, and I'm not enough of a wonk to say much about the map-design philosophy of enemy placements—though I'd be happy to point you toward somebody who is. It will suffice to say that Doom II's monsters still aren't boring after almost thirty years. They're as brilliantly designed as the arsenal in terms of their flexibility and variety of functions: a couple of chaingunners can either be easily dispatched chumps or viciously lethal depending on the terrain, the way in which they enter the fray, and how they interact with other actors in the area. The risk assessment of a few pinkies or spectres can range from "minor annoyances" to "five-second countdown to getting cornered and mauled to death." Usually, the terrifying cyberdemon is the last monster you want to see—and sometimes he's practically a gift from the mapmaker. Within the machinery of a Doom map, the function of any individual cog rests on its place in the architecture and how it interacts with the surrounding gearwork.

A few weeks ago I finally dove into the world of megaWADs with a playthrough of Valiant, the brainchild of one Paul DeBruyne, who goes by "skillsaw" in the online Doom world. I don't write much about video games these days; the fact that I'm making an occasion of praising Valiant should give some indication of how much I enjoyed it.

Doom II: Valiant (2015)

To start: Valiant looks fantastic. I was always impressed by how SNK kept pushing the limits of the 16-bit Neo Geo hardware for more than ten years after its 1990 release—and I'm tempted to say that what amateur mapmakers have managed to do with the rickety old Doom engine is even more impressive. As far as I can tell, skillsaw relied on the submissions of Doom-enthusiastic pixel artists for the game's custom textures and sprites—which shouldn't be read as a criticism, but as an acknowledgement that Doom modding and mapmaking continue to be a collaborative, communal endeavor.⁷ The soundtrack, likewise, draws on tunes composed by members of the Doom scene, as well as tracks "borrowed" from the Super Famicom game Gundam Wing: Endless Duel. (There's also a .midi rendition of "The Final Countdown" used somewhere, but I fear I've already said too much.)

Skillsaw's maps are generally excellent—they look great, they consistently surprise and impress, and hardly give you a moment's peace. But what's most impressive about Valiant are the alterations it makes to classic Doom's fundamentals through tweaks to the source code. Tampering with a formula that's already so close to perfect can be a dicey proposition, but skillsaw's changes are well-considered, implemented with care and deliberation, and undoubtedly change Doom for the better.

Begin with the weapons: the pistol and chaingun are now the "super pistol" and "super chaingun." A pistol with a slightly higher rate of fire doesn't exactly renovate the game, though it does change the dynamic of "pistol start" playthroughs by allowing the player to stand up to stauncher opposition while he searches for a shotgun at the beginning of a map. The modded chaingun's drastically higher rate of fire, on the other hand, is game-changing. Before playing Valiant, I never realized how far the chaingun falls short of virtually every other weapon: it only really excels at dispatching zombiemen and suppressing attacks from the flinchier demon breeds. Valiant's chaingun is not only stronger (and much more fun) than the vanilla model, but its replacing a lackluster weapon in the 4-slot meant that the maps had to be made more hazardous to compensate for the addition of a weapon whose multipurpose efficacy is on par with the super shotgun and plasma rifle.
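To put "rate of fire" in engine terms: Doom runs its simulation at a fixed 35 tics per second, and a gun's firing speed is just a matter of how many tics its firing states consume per shot (the vanilla chaingun spends four, if memory serves). The sketch below is illustrative arithmetic only—the "2 tics" figure is a made-up example, not Valiant's actual DeHackEd value.

```c
/* Illustrative only: rate of fire as a function of tics per shot.
 * TICRATE is Doom's fixed simulation rate; the souped-up figure is
 * hypothetical, not skillsaw's number. */
#include <stdio.h>

#define TICRATE 35  /* tics per second */

static double shots_per_second(int tics_per_shot)
{
    return (double) TICRATE / tics_per_shot;
}

int main(void)
{
    printf("vanilla chaingun (4 tics/shot):            %.2f rounds/sec\n",
           shots_per_second(4));
    printf("hypothetical super chaingun (2 tics/shot): %.2f rounds/sec\n",
           shots_per_second(2));
    return 0;
}
```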

Similarly, Valiant shows how much room for improvement id Software left in classic Doom's bestiary—specifically where the bipedal fireball-slingers are concerned.

First: the imp. His commercial model is slightly more durable than a zombie, and walks around throwing fireballs, one at a time. Valiant's super imp (seeing a theme here yet?) flings two fireballs, one after the other, which travel slightly faster than before. This relatively minor change carries significant consequences: it forces a more prompt response from the player and extends the attack's danger window. I can't even tell you how many times I dodged a super imp's first fireball, only to walk face-first into the second.

Next: the hell knight, who's basically a bigger imp with a higher damage output and more health. Valiant altogether replaces him with the pyro knight: a sprite edit that hurls a barrage of chained projectiles that usually leave you dead after a second direct hit. The attack has a long duration, takes up a lot of space, and travels fast. It isn't easily dodged unless you've got swift reflexes and are paying close attention—and the time you're focused on the pyro knight and his fireballs is time you're not thinking about what the other malefactors locked onto you are doing.

Finally, the baron of hell (just a hell knight with twice the health) has two replacements: one an aesthetic substitute, the other a coded one. The aesthetic substitute is the cybruiser: basically a hell knight given a makeover to become a miniature cyberdemon. He has the same amount of health as a hell knight, fires the cyberdemon's one-hit-kill rockets, and is immune to blast damage because he takes the spider mastermind's slot in the source code. (The spider mastermind is still around, taking the place of the vestigial light amplification power-up, but without her resistance to rockets she's even less of a menace than usual.)
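A quick aside on that last point, because it's a neat trick: in the released Doom source, radius (blast) damage skips two hardcoded monster types outright—the cyberdemon and the spider mastermind—so any actor dropped into the mastermind's slot inherits the exemption for free. Here's a stripped-down, self-contained sketch of that check (the real one lives in PIT_RadiusAttack); the health figures and the cybruiser's presence are just for illustration.

```c
/* Simplified sketch of Doom's blast-damage exemption, not the actual engine
 * code: whatever occupies the MT_SPIDER slot never takes radius damage. */
#include <stdio.h>

typedef enum { MT_BRUISER, MT_CYBORG, MT_SPIDER } mobjtype_t;

typedef struct { mobjtype_t type; int health; } mobj_t;

static void radius_damage(mobj_t *thing, int damage)
{
    /* the hardcoded exemption: cyberdemon and spider mastermind shrug off
       splash damage entirely */
    if (thing->type == MT_CYBORG || thing->type == MT_SPIDER)
        return;
    thing->health -= damage;
}

int main(void)
{
    mobj_t baron     = { MT_BRUISER, 1000 };  /* baron of hell */
    mobj_t cybruiser = { MT_SPIDER,   500 };  /* knight sprite, mastermind slot */

    radius_damage(&baron, 128);      /* eats the blast */
    radius_damage(&cybruiser, 128);  /* shrugs it off */
    printf("baron: %d, cybruiser: %d\n", baron.health, cybruiser.health);
    return 0;
}
```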

Within the source code, the baron's slot is taken by the super mancubus—a variation of the flame-spewing fat bastard sprite-edited to resemble his Doom 3 version. He's almost as tanky as a baron, and launches his fireballs as though he ganked the spread shot power-up from Contra. His projectiles are tricky to dodge from a distance as they fan out, and they're instant death if you're standing close enough to soak them all up at once. On the plus side, the super mancubus's value as a useful-idiot infighter is only rivaled by the cyberdemon.

These aren't just improvements; they're fixes. The deficiency of the hell nobles' design was first laid bare in Doom 64, where, without revenants, chaingunners, and arch-viles (and with very few arachnotrons) lurking about to keep you on the move and divide your attention, so many encounters with what were supposed to be high-tier enemies became repetitive exercises in circle-strafing and taking pot shots from behind a corner. Hell nobles are only interesting or fun when faced as a complement to a larger host of enemies, and their replacements in Valiant enhance these group dynamics by demanding more situational awareness on the player's part, and by further diversifying the possibilities of enemy combinations.

Beyond these, skillsaw introduces three more brand-new malefactors. One is the suicide bomber, a screaming, sprinting zombie clutching a stick of dynamite. Let him get too close, and he explodes, gibbing you both. Another is Valiant's custom endboss, the super-vile: a hyperactive arch-vile with a multi-hitting flame attack and almost twice the health of a cyberdemon. But the third of these new enemies—well, I was prepared to call it my favorite, but after getting zapped to death by him a few dozen times, I'm not sure "favorite" is the right word.

The arachnotron is another classic Doom monster that somewhat missed the mark. It has its uses to the mapmaker, but it's rather a redundant item in the toolbox. Its plasma gun has a high rate of fire and the projectiles are among the fastest in the game—but as a long-distance sniper, the chaingunner literally outguns it with his instantaneous hitscan attack. In close quarters, the arachnotron has as much health as a hell knight, is more nimble on its feet, and, in theory, can deal damage more quickly—but since it's fatter and flinchier than a hell knight, it soaks up more shotgun pellets per shot and is more easily jarred out of attacking. There's just not much the arachnotron contributes except for variety.

Valiant fixes the arachnotron by introducing a new enemy called the arachnorb: an arachnotron freed from its mechanical chassis that floats around like a cacodemon and fires the same devastating rapid-fire plasma bolts it did before. It falls to a single blast from the super shotgun, but its mobility as a flier gives it more opportunities to peg you from a distance and to sneak up from behind when you're not paying attention. Naturally, skillsaw likes to deploy them in swarms.

How does the arachnorb improve the arachnotron? Well: through a hack in the source code that sets a fifty percent chance of an arachnorb spawning from a dying arachnotron. Instead of completing its death animation, the arachnotron writhes free from its mechanical frame, floats into the air, and continues the assault. It's not terribly threatening if you're facing just one lonely arachnotron in a corridor, but when you're negotiating multiple threats in the fog of battle, a fifty-fifty chance of a potentially lethal foe rolling a saving throw presents a dangerous complication. (Interestingly, since the arachnotron and the arachnorb count as different monster species, they're capable of infighting and killing each other.)
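For the curious, the mechanism presumably looks something like the sketch below—a guess at the shape of the trick in miniature, not skillsaw's actual patch. Doom's random number generator hands back a byte from 0 to 255, so "less than 128" is the engine's idiom for a rough coin flip.

```c
/* Hypothetical miniature of the arachnorb hatch: when an arachnotron dies,
 * flip a coin; on heads, a flying arachnorb takes its place instead of the
 * corpse finishing its death animation. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* stand-in for Doom's P_Random(): a byte in [0, 255] */
static int p_random(void) { return rand() & 0xFF; }

/* returns 1 if the dying arachnotron "hatches" into an arachnorb */
static int arachnotron_death(void)
{
    if (p_random() < 128) {
        /* in the real mod, this is where the new flying actor would be
           spawned at the corpse's position and sicced on the player */
        return 1;
    }
    return 0;  /* stays a corpse */
}

int main(void)
{
    int hatched = 0, trials = 10000;
    srand((unsigned) time(NULL));
    for (int i = 0; i < trials; i++)
        hatched += arachnotron_death();
    printf("%d of %d arachnotrons came back as arachnorbs\n", hatched, trials);
    return 0;
}
```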

Is it worth also mentioning that pinkies have been sprite-edited to look sort of like their Doom 3 relatives (no eyes, robot legs), renamed "super demons," and adjusted such that their bite attack hits more efficiently? Evidently so.

The biggest problem with Valiant is that it's the only megaWAD containing these adjustments. I recently began an Eviternity playthrough; picking up the ordinary chaingun and sidestepping ordinary fireballs tossed by ordinary imps and hell knights has been a bit of a drag. It's hard going back to vanilla with a richer flavor lingering on your palate. 

I suppose I have nothing else to add, and the hour is too late for me to wax aesthetic or philosophical for a summing up of things. But I'll say that in my harried old age, where I play video games much less frequently than I used to, I find myself almost exclusively interested in playing ones developed by auteurs and eccentrics. I have enough corporate-engineered cultural artifacts fed to me just by dint of the fact that I'm on the internet. Give me passion projects, please.

Thank you for indulging me tonight. We'll get back to Wittgenstein for real and wrap things up in a few days.


1. The silent letters in many English words (think of the e's at the end of "guide" or "love"), on the other hand, as pointless as they might seem and as frustrating to the nonnative English speaker as they undoubtedly are, won't go away so easily. Someone fluent and fully literate in English won't sound out the individual letters in reading the written word "guide"; she'll respond to guide as a singular stimulus, not to g u i d e as five stimuli in sequence. The e is not vestigial so long as a reader will pause over guid in a sentence and wonder if it's pronounced "gwid" (and then possibly ask herself if she's familiar with any contexts in which she's heard "gwid" spoken before) or if it might otherwise be some archaic or stylistic way of spelling "good."

2. There are, of course, many well-known cases of players bending the rules by exploiting bugs in the programming, but for the time being let's ignore them. We'll also disregard the social agencies which determine their "legality" (read: legitimacy) during play, though that would be an interesting topic to explore.

3. Skinner has something interesting to say on this point:

Our definition of verbal behavior, incidentally, includes the behavior of experimental animals where reinforcements are supplied by an experimenter or by an apparatus designed to establish contingencies which resemble those maintained by the normal listener. The animal and experimenter comprise a small but genuine verbal community.

By Skinner's definition of verbal behavior, then, we might say that a video game designer devises a language in which players become conversant. Though this isn't the place to get into the details, the architects of relational frame theory were motivated precisely by what they perceived as a fatal flaw in this formulation, and they're getting closer to persuading me that their criticisms are not unfounded.

4. In theory, somebody could decide that her goal should be to run through as many different rooms as she can (the grid of rooms is procedurally generated at the game's start, but remains fixed in that configuration during play) before inevitably getting shot down, ignoring points altogether—but that won't get her initials in the Hall of Immortals. Back when arcade games were played in public spaces, this incentivized playing Berzerk the way its designers intended. It would not have been unimaginable for a "metagame" to have emerged within a given arcade or region in which players adopted the number of rooms visited as the performance criterion instead of the number of points acquired—but flouting custom can be difficult, particularly when the custom is established by the people who literally write the rules of the games.

5. Historical footnote: back in the early 1980s, a programmer named Silas Warner played Berzerk and was inspired to make his own game for the Apple II. What he came up with could be described as Berzerk with stealth elements and item-collection mechanics, and it was called Castle Wolfenstein. A little more than a decade later, Apogee released a spiritual sequel to Castle Wolfenstein that retained and revamped most of the original game's basic elements (mazes, Nazis, treasure caches, ammo management, etc.) packaged with the novel gimmick of a first-person perspective. This, of course, was Wolfenstein 3D. The team that developed Wolfenstein 3D soon went on to create Doom.

So: in the genealogy of video games, there is a direct line of descent from Berzerk to Doom. Having played a lot of Berzerk and a lot of Doom, I can't say this came as a complete shock.

6. Yeah, yeah—and after I made a big deal about the fictiveness of essences. Here we have, once again, the two-vocabularies problem.

7. For a list of credits, you can look at Valiant's page on Doomworld. And, if you'd like, you can also download Valiant for yourself.
