Baseball’s 9th Inning

After a frigid winter, baseball’s return is most welcome. In the high summer months, it’s hard to have a bad time at a ballgame. Winning or losing is almost immaterial. There are 162 games in a season; no contest feels like life or death. At worst, a day at the ballpark is an overlong picnic—you return home a little burnt from the sun, your stomach heavier and your pocket lighter from overpriced junk food, but really no worse off for the experience.

Well-played baseball is a simple aesthetic pleasure: a circus of long arcing throws, and of swift line drives. It is a game of detail, where the little things—hitting the cutoff man, working the count, bunting the runner over—add up. Like a Japanese tea ceremony, it is a meticulous ritual: “Take Me Out to the Ball Game” during the seventh-inning stretch, and an arcane lexicon which would seem silly were it not so deeply embedded in the American mind.

Baseball is a beautiful game. Which makes it such a shame that it is dying.

Let’s dispel all illusions—baseball is aging as fast as the Republican Party. The median age for a viewer of a nationally broadcast baseball game is 54, higher than for football, hockey, or basketball; only 6 percent of people who watched the 2013 World Series were under 18. The result will be a whole generation that grows up not watching baseball.

Baseball apologists point to strong attendance figures, and record team valuations. But national television audiences, surely the strongest indicator of the sport’s overall popularity, are dwindling.

Much of this is a self-inflicted wound. More and more teams are walling themselves behind regional cable networks, which promise lucrative subscription revenues but limit access to all but the most diehard fans.

But I think the cause of this decline is more fundamental. The root of the problem is that baseball remains a nineteenth-century game stuck in the twenty-first century.

Baseball’s Golden Age coincided with a period of rapid urbanization in America. The ballpark was a small slice of the country’s recent rural past, transplanted into the heart of the bustling new cities. The leisurely pace, the picnic-like occasion of going to a ballgame, evoked a simpler time.

But for a younger generation, the game is far too slow. There are highlight reel plays and astonishing feats of athleticism, but these merely punctuate long stretches of routine—pitch, hit, pitch, hit, double play, infield fly. The fan sees every event as part of a larger story: Ortiz is hitless with runners on second and third; Ortiz has a grudge against this pitcher after being beaned last season. But for the non-fan, this tension is invisible. The game is merely boring.

Major League Baseball has made some recent concessions toward shortening games, like reducing dead time between innings, and forcing batters to keep a foot in the box between pitches. But these changes are at most marginal; the sport has stubbornly stuck to tradition. And I would not have it any other way.

To abandon baseball’s peculiarities in an attempt to boost its popular appeal would be disastrous. Baseball speaks to old-time values of fair play and equality. Pitch clocks and time limits are antithetical to its spirit—there’s no running out the clock; you have to give the other team a chance to bat. The historian Jacques Barzun once said that to know baseball was to know the “heart and mind” of America. Barzun wrote this in the 1950s, baseball’s high-water mark. Today, if TV ratings were your only guide, football’s ritualized violence would reflect America’s soul.

There is a sad beauty to a fading thing. But, for the moment, summer is here—the ballparks are still filling up, and the smell of peanuts and hot dogs beckons. It may be the 9th inning, but there’s still plenty of baseball left to play.


Generic New York Times Opinion Column

“Witty epigram commenting about man’s place in modern society/the economy/education.”

Attribution to a dead white man who has two (or more!) initials instead of a first name. Broad, sweeping generalization about the human condition.

Cherry-picked statistic illustrating disturbing trend over recent years. Out-of-context quotation from prominent businessman/politician expressing agreement.

Citation of recent trendy publication—preferably by another Times author—concerning the problem. Expression of agreement with a niggling qualification.

Anecdote about the Way America Was. Nostalgia for older era of American society/education (I’m old as dirt!). Slight qualification to show that I’m still hip, with it, a true hoopy frood (people still say that, right?).

Simplified three-point argument, clearly enumerated for the dolts who actually flip to the opinion pages.

Classical allusion to show that I’m better-educated than these dolts.

Facile technological/business-related bullet point to reveal that maybe I’m not.

Quick third point to round out the argument, preserve the symmetry of the thing, and wrap this all up in a nice ribbon to share at cocktail parties. (See what I did in that last sentence?)

One more padded quote to get me to 800 words, and another New York Times column complete—cha-ching, baby!


A Salute to an Icon

This February saw the passing of Leonard Nimoy, known to millions of “Star Trek” fans as Mr. Spock. If “Trek” is a religion, then for its legions of adherents (including this author) Spock is something of a patron saint. Spock’s death at the end of “Star Trek II” inspired more anguish than any fictional death since that of Sherlock Holmes; the death of Nimoy, Spock in the flesh, has left a wound in the imaginations and hearts of millions of fans.

For Nimoy, this mythological status was at times a reluctant burden. His 1975 memoir, “I Am Not Spock”, attempted to disentangle the man from his character—and was summarily rejected by diehard Trekkers. But slowly, inevitably, Leonard Nimoy became inseparable from Mr. Spock, a fact that Nimoy first accepted, then (at least publicly) celebrated. A contrite follow-up, “I Am Spock”, was published in 1995.

Perhaps it is not surprising that Nimoy, who had serious artistic ambitions outside of “Trek”, once wished to disassociate himself from the pointy-eared Vulcan. Most critics quickly dismiss “Star Trek” as camp. Often the show’s idealism became preachy and the sci-fi allegories stupidly transparent (cf. Nazi Planet and Garden of Eden Planet). On occasion, Nimoy’s costar William Shatner chewed up the scenery faster than the planet-killer from “The Doomsday Machine”. And after “Star Wars” came out in 1977, its spectacular space battles made “Trek”’s styrofoam rocks and velour costumes look cheap.

But “Star Trek” has an earnest, unvarnished optimism—absent from the slickly commercial “Star Wars”—that defies cynicism. The profound loyalty that “Trek” commands from millions of intelligent fans suggests that there is something artistically honest and essentially true about it. And whatever that thing is, Leonard Nimoy’s thoughtful, serious portrayal of Spock certainly had a lot to do with it.

Spock could very easily have been the emotionless automaton that he is often reduced to in “Trek”’s numerous parodies. But Nimoy gave Spock life, an understated wit, and a comforting wisdom that reached beyond the bounds of cold logic. Spock was no paragon of pure reason; his half-human heritage gave him emotions that his logical Vulcan side could not hide. What critic Roger Ebert called “Star Trek”’s essential question—the line between what is human and what is not—found its central battleground in the soul of Mr. Spock.

Watch Nimoy’s performance in “This Side of Paradise”, an episode where psychedelic spores—these were the 60s, after all—induce Spock to finally let loose and fall in love. When cured of the spores’ effect and asked about his experience, Spock wistfully responds: “I have little to say about it, captain. Except that for the first time in my life, I was happy.” At that moment, years of repressed feelings subtly lurk beneath Nimoy’s gaunt face.

Or see “Amok Time”, the classic episode that introduced the Vulcan salute and the accompanying “live long and prosper” (the former was Nimoy’s own invention, based on a Jewish priestly gesture of blessing). After a rare defeat in a battle of wits, Spock wryly notes to his victor that “having is not so pleasing a thing after all as wanting. It is not logical, but it is often true.”

As the series went on, Spock grew with Nimoy, and Nimoy with Spock; in a late interview, Nimoy noted that Spock’s reason and calm influenced his own personality. Spock the character eventually learned to balance logic and his emotions, maturing from a scientist suspicious of his feelings into a sort of Stoic sage. In classic “Star Trek” fashion, the answer to conflict was not total victory (à la “Star Wars”’s fable of good and evil) but understanding.

That Spock’s catchphrases have become so ubiquitous, his mannerisms instantly recognizable, has obscured how richly drawn a character he was. Spock’s reconciliation of his human and Vulcan halves formed the heart of the Enterprise’s original voyage. Nimoy’s inspired portrayal bridged another divide—the gap between camp and art, between the mundane and the sublime.


My Objection to Moral Objections to the Ice Bucket Challenge

I’m a contrarian by nature. If something is trending on social media, my natural inclination is to figure out what’s wrong with it, and hopefully spoil some of the fun (I can be a real sourpuss). For instance, here are two goofs in everyone’s favorite movie, Frozen:

Elsa’s braid passes through her arm

Kristoff’s thumb clips into Anna’s side

So you can imagine my frustration when the ALS Ice Bucket Challenge exploded over my Facebook newsfeed, and I was nominated (more on that later). As is my wont, I began reading up on some of the contrarian literature to figure out how to respond.

But I found myself annoyed by the arguments and the poor philosophy involved, so much so that I dumped a bucket of ice over my head. This piece by William MacAskill for Quartz, in particular, gets it seriously wrong.

MacAskill writes:

The key problem [with the Ice Bucket Challenge] is funding cannibalism. That $3 million in donations doesn’t appear out of a vacuum. Because people on average are limited in how much they’re willing to donate to good causes, if someone donates $100 to the ALS Association, he or she will likely donate less to other charities… Research from my own non-profit… has found that, for every $1 we raise, 50¢ would have been donated anyway… So, because of the $3 million that the ALS Association has received, I’d bet that much more than $1.5 million has been lost by other charities.

Sure, I buy the crowding out argument—it seems reasonable that people have a budget for charity, and increased giving to one area can mean less giving to another. But it ignores the possibility of expanding the overall pool of charitable funds. A campaign like the ALS Ice Bucket Challenge with a highly visible commitment mechanism (i.e. if you get tagged and don’t donate or pour ice over your head, you’re an asshole) seems to me like an excellent way to force people into giving when they ordinarily wouldn’t have. Implicitly, MacAskill even acknowledges this: if 50¢ on the dollar is crowding out other charities, then 50¢ of new money is going to charity. That’s $1.5 million that wouldn’t have gone to charity otherwise.

He adds:

Almost every charity does the same thing — engaging in a race to the bottom where the benefits to the donor have to be as large as possible, and the costs as small as possible. We should be very worried about this, because competitive fundraising ultimately destroys value for the social sector as a whole. We should not reward people for minor acts of altruism, when they could have done so much more, because doing so creates a culture where the correct response to the existence of preventable death and suffering is to give some pocket change.

… There is a countervailing psychological force, called commitment effects. If in donating to charity you don’t conceive of it as “doing your bit” but instead as taking one small step towards making altruism a part of your identity, then one good deed really will beget another. This means that we should tie new altruistic commitments to serious, long-lasting behavior change. Rather than making a small donation to a charity you’ve barely heard of, you could make a commitment to find out which charities are most cost-effective, and to set up an ongoing commitment to those charities that you conclude do the most good with your donations.

I find critiques like this odd, because they’re essentially critiques of human nature. I view the “competitive fundraising” and the “minor acts of altruism” as responses by charitable organizations to the constraints of human behavior. People have limited funds to give, and there are a lot of good causes out there, so of course there’s competition. Moreover, human beings are naturally myopic, prone to recency bias, and vulnerable to sensationalism, so charities respond by trying to catch people’s attention and attract one-time donations—the “minor acts of altruism” and loose “pocket change” that MacAskill derides. In this sense, the slickly filmed Kony 2012 campaign and the viral ALS Ice Bucket Challenge were exceptionally well designed.

But the correct response shouldn’t be to complain (as MacAskill does) about how people don’t make long-term commitments to charitable giving, because human beings don’t seem inclined to act that way to begin with. Given the choice between trying to alter human nature and tailoring the incentive structure of charity to fit human nature (à la Kony or ALS), I would prescribe the latter.

Prescriptive philosophical arguments should deal exclusively with the world we live in, not some moral fairyland where we can assume away problems of human nature. That’s the realm of economics.

And, in large part to raise awareness of Lou Gehrig’s disease (and in small part to stand by my argument), here is me dropping a bucket of ice over my head.


Rule, Britannia? Do Hegemons Enforce Peace? (Paper)

May at Harvard means Reading Period, and Reading Period means writing papers and studying for exams. I took an economics tutorial this semester on the Political Economy of International Conflict, which I absolutely loved–it was a chance to combine two of my favorite subjects, economics and military history.

Hopefully some of that enthusiasm rubbed off on the final paper I wrote for the class, which is on hegemonic stability theory. In a nutshell, the idea is that if one country (a hegemon) is much more powerful than the others, this country’s ability to unilaterally impose costs on aggressors will deter smaller countries from going to war. In the paper, I set up a game theoretic model to express this a little more formally, and ran some regressions to test this hypothesis empirically.
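
To make the intuition concrete, here’s a toy sketch of the deterrence logic in Python (emphatically not the paper’s actual model; every parameter name and value below is an illustrative assumption):

```python
# Toy sketch of the deterrence logic in hegemonic stability theory.
# NOT the paper's actual model: every name and number below is an
# illustrative assumption.

def challenger_goes_to_war(gain, war_cost, p_intervene, punishment):
    """A challenger fights only if its expected payoff is positive."""
    return gain - war_cost - p_intervene * punishment > 0

# As the hegemon's capacity to punish grows, war stops paying off.
for punishment in [0, 10, 20, 40]:
    war = challenger_goes_to_war(gain=10, war_cost=2,
                                 p_intervene=0.5, punishment=punishment)
    print(f"punishment={punishment}:", "war" if war else "deterred")
# punishment=0: war, 10: war, 20: deterred, 40: deterred
```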

Here is the link: Rule, Britannia? Do Hegemons Enforce Peace?

This was the first real independent research paper I’ve ever done, so it was actually a little exciting. (That last sentence takes second prize in “nerdiest sentences uttered by me this month”, after: “It sucks that Lucasfilm is making the Expanded Universe non-canon, because that means all my Star Wars books are obsolete”.) Some personal observations after writing it:

  • Once you get the hang of it, LaTeX makes your life a lot easier–it handles citations smoothly and makes equations less of a pain (a minimal example follows this list). For a brief moment, when I was running Stata and LaTeX at the same time to compile some regressions, I felt like an economics god. Then I remembered that these are the preferred tools of most academic economists. And I felt less cool.
  • I was really happy to get some statistically significant results in my analysis. But I definitely did feel the pressure of producing something that was statistically significant–it certainly wouldn’t have felt as good if I’d turned in a regression table bereft of those lovely asterisks. And this was only a term paper–imagine the pressure career academics must feel! No wonder science suffers from “exaggeration [and] cherry-picking”.
  • Academic writing is a very weird and very specific genre. The goal is clarity at the expense of all else, and unfortunately “all else” happens to include readability. At some point when I have more time I’ll try and recast this as a blog post, with less jargon (what is a dyad, anyway?) and a little more wit.
  • Nevertheless, I enjoyed writing this! There’s nothing more fun and satisfying than being creative–and when it’s something you’re really interested in, all the better.
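
For the curious, here’s roughly what that LaTeX convenience looks like–a minimal sketch, in which “gilpin1981” and “references.bib” are hypothetical placeholders:

```latex
% Minimal sketch: a citation and an equation in a few lines.
% "gilpin1981" and "references.bib" are hypothetical placeholders.
\documentclass{article}
\begin{document}
Hegemonic stability theory \cite{gilpin1981} predicts less war when a
hegemon can punish aggressors, i.e., when for any challenger
\begin{equation}
  b - c - p \cdot k < 0,
\end{equation}
where $b$ is the gain from war, $c$ its direct cost, $p$ the chance the
hegemon intervenes, and $k$ the cost it can impose.
\bibliographystyle{plain}
\bibliography{references}
\end{document}
```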

Thank you for reading. If you’re a Harvard student and read this on top of all the material you have to cover, kudos to you. Now get back to studying!


The Worst Hyperinflation in World History

No real analysis here, but an interesting thing I picked up in my reading:

Weimar Germany is typically thought of as the worst modern example of hyperinflation. In terms of its political consequences, it certainly was disastrous. But the dubious honor of the worst inflation in world history actually goes to postwar Hungary:

When the war ended, the U.S. dollar traded at 1,320 pengös, already a severe collapse from the 5.4 pengös to the dollar of 1938… By the end of 1945 Hungarian prices had risen four hundred-fold, and the pengö had dropped to 290,000 to the dollar… By the end of July one U.S. dollar was worth five nonillion pengös (a nonillion is one followed by thirty zeros). The government printing presses could not keep pace with the wild inflation, and at this point all of Hungary’s banknotes in circulation combined were worth one one-thousandth of an American cent (Frieden, Global Capitalism, p. 273).
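
Just to put those numbers side by side (a quick back-of-the-envelope sketch; the July figure is mid-1946, and “five nonillion” is 5 × 10³⁰):

```python
# Scale check on the exchange rates quoted above (pengös per dollar).
rate_1938 = 5.4
rate_war_end = 1_320
rate_end_1945 = 290_000
rate_july_1946 = 5e30   # "five nonillion"

print(f"{rate_war_end / rate_1938:,.0f}x")        # ~244x by war's end
print(f"{rate_july_1946 / rate_end_1945:.1e}x")   # ~1.7e+25x more by mid-1946
```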

Scary stuff.


“House of Cards” Season 2 Review (Spoiler-Free)

Today I can happily say that I am free of my House of Cards addiction. This is, of course, only two days after the premiere of all of Season Two’s episodes on Netflix. Such a release strategy practically begs for binge watching, and I am one of its victims.

Of course, finishing a days-long TV binge inevitably leaves a feeling of emptiness inside. But after experiencing this season of House of Cards, I can definitively say that something felt missing.

Perhaps it’s the series’ lack of fleshed-out, sympathetic characters. Every player in House of Cards is either a puppet or a puppet-master, and every decision is made in cold, calculated self-interest. It’s hard to relate to anybody.

The main character, Vice President Frank Underwood, is not so much a character as a bulldozer, plowing over all the obstacles before him. His fourth wall-breaking asides only reinforce the sense that he operates on a plane above all his political adversaries. Underwood is played admirably by Kevin Spacey, who (alone among the cast) at least seems like he’s having fun.

Frank is half of the series’ most interesting relationship, the manipulative, Machiavellian marriage of Frank and Claire Underwood. Claire, played by Robin Wright, is even steelier than her husband, and she has some choice scenes early in the series to show off her hard-boiled ruthlessness.

So the series at least has two interesting leads—greater shows have been built on less. What gives?

Perhaps, then, it’s this season’s lazy plotting. The writers constantly dangle threads and allow them to meander between episodes, only to cut them off abruptly, almost casually—which makes you wonder why they bothered in the first place. A subplot about hackers and government surveillance (with some not-too-subtle digs at Edward Snowden and the NSA) is particularly cringe-worthy.

But (thankfully) these subplots only occupy half of the running time, leaving space for the series to focus on the Underwoods’ brutal rise to power. And this central plot line, I think, is why I chose to spend over ten hours in three days watching a flawed TV show.

Frank Underwood may be merciless, even unsympathetic, but (as even President Obama admits) he sure gets a lot done. He bullies Congress into passing crucial budget legislation, deftly manages a foreign policy crisis, and carefully undermines a domestic political threat. And I think that is the level that House of Cards operates at—fantasy and wish fulfillment. Even if Frank Underwood is a cold-blooded sociopath, it sure is satisfying to see someone knock Congressional heads together and bring a measure of order to the chaos of Washington.


Demography of Middle-earth: the Shire

This is the third part of my series on the demography of Middle-earth. The first part concerned Gondor, and the second part was about Rohan. I also posted recently about the economic impact of Smaug.

Map of the Shire

The Shire was the most populous region in the depopulated land of Eriador. Home of the “half-grown Hobbits, the hole-dwellers”, it was a gentle land, largely unknown to and unknowing of the wider world. Throughout the Third Age, the Hobbits lived a peaceful pre-industrial existence, tilling the fertile earth, growing pipe-weed, and staying out of the business of those they called the “Big People”.

In the prologue to the Lord of the Rings, Tolkien describes the Shire as being forty leagues from east to west, and fifty leagues from north to south–an area of roughly 18,000 square miles (a league is three miles). It was divided into four quarters, or farthings: Northfarthing, Westfarthing, Eastfarthing, and Southfarthing. There were significant variations in climate between the farthings: for instance, Northfarthing was the only region that regularly saw heavy snowfall, while Southfarthing was warm enough to produce the best pipe-weed in the Shire.

Located in the Westfarthing, Michel Delving was the largest town in the Shire and, as the seat of the mayor, its de facto capital. Hobbiton, the home of Bilbo and Frodo Baggins, was also in the Westfarthing and near the geographical center of the Shire. Other larger settlements included Tuckborough, Bywater, and Frogmorton.

Bilbo Baggins and the out-of-place waistcoat

It’s no secret that Tolkien based the Shire on memories of the idyllic English countryside from his youth. What’s noted less often is how anachronistic the Shire is when compared to the rest of Middle-earth: Bilbo Baggins wears a waistcoat with gold buttons, when waistcoats were only invented in the 17th century; Bilbo also keeps a clock on his mantelpiece, when clocks that size were only built starting in the 16th century; and Lobelia Sackville-Baggins carries an umbrella, an invention that came to England only in the mid-17th century. Indeed, the Shire seems closer to depictions of the semi-mythical Merry England of the 16th and 17th centuries than to the gritty, pre-Renaissance realms of Gondor and Rohan.

This series has thus far relied on historical parallels to make estimates about Middle-earth’s population. Gondor, if you’ll recall, was compared to Constantinople and Norman England; Rohan was compared to the Goths. We already know the place of comparison for the Shire–historical England, as Tolkien imagined it. Thus our comparison is wholly dependent on our choice of year.

Using any date after 1700 would be a grave overreach: by this time the Agricultural Revolution in Britain was well underway. Jethro Tull invented the seed drill in 1701, and England’s population would skyrocket shortly afterward due to the growing food supply. Moreover, Tolkien was explicit about the Hobbits’ distaste for machinery; in the Prologue, he notes that “they do not and did not understand or like machines more complicated than a forge-bellows, a water-mill, or a hand-loom”. We will have to accept Lobelia’s umbrella as a fantastical anachronism–admittedly a small concession in a world of trolls and dragons.

Let’s first try the calculation with the population numbers from 1520, the earliest date in my 16th-17th century Merry England timespan with reliable figures. Around 1520, England and Wales’s rural population was approximately 1.82 million, which gives a population density of around 31 people per square mile. Translated to the Shire’s area, this yields a population of around 558,000. If we used England and Wales’s rural population from 1600 (2.87 million), the Shire’s estimated population shoots up to around 890,000.
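
Here’s the arithmetic as a quick sketch (the ~58,000 square mile area for England and Wales is my rounded assumption; rounding the density to 31 people per square mile first, as in the text, gives 558,000 exactly):

```python
# Back-of-the-envelope Shire population from the figures above.
LEAGUE_MILES = 3
shire_area = (40 * LEAGUE_MILES) * (50 * LEAGUE_MILES)   # 18,000 sq mi

ENGLAND_WALES_AREA = 58_000   # sq mi; my rounded assumption

for year, rural_pop in [(1520, 1.82e6), (1600, 2.87e6)]:
    density = rural_pop / ENGLAND_WALES_AREA             # people per sq mi
    print(year, round(density), f"{density * shire_area:,.0f}")
# 1520 -> ~31 per sq mi -> ~565,000 (558,000 with the pre-rounded density)
# 1600 -> ~49 per sq mi -> ~891,000
```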

My gut instinct is that this second estimate is too high; it can serve as our upper bound. Indeed, even the lower bound struck me at first as too large. However, after further thought, I think it can be justified. Tolkien is keen to note that the Shire is a rich and bountiful land, blessed with fertile earth that was carefully tilled even before the arrival of the Hobbits. Moreover, Hobbits lived much longer than their 16th-century human counterparts, regularly reaching the age of 100. With these considerations in mind, it seems plausible that the Shire could support a Hobbit population of around half a million.


The Macroeconomic Impact of Smaug

Today, I’m taking a break from my demography of Middle-Earth series to tackle a problem in my chosen field of study–economics. More specifically, today’s post is about the macroeconomic impact of Smaug, the red dragon of Erebor, on the economy of Middle-Earth.


Surprisingly, there already exists a sizable literature on this subject.

To summarize, the consensus is that the arrival of the great wyrm of the North was both a fiscal and monetary shock: fiscal, due to the enormous damage to the productive capacities of the people of Middle-Earth (the destruction of Dale and Erebor, the roasting and consumption of countless skilled Dwarven miners and smiths); and monetary, because of the abrupt removal of the biggest hoard of currency in Middle-Earth.

How big a hoard, you might ask?

According to one calculation, Smaug is more than 60 meters long, and has a wingspan of over 50 meters–by all measures, an impressive beast. But those who have seen the movie know that Smaug is easily dwarfed by his massive bed of gold and jewels; he’s able to lie completely hidden beneath his treasure hoard. And recall that in the film the Dwarves’ plan to rid themselves of the dragon involves (spoiler!) drowning him in a pool of molten gold.

Now, by one estimate, the sum total of all the gold mined in human history would form about a 25 meter cube, with a value of over $12.4 trillion (using the present gold price of roughly $1,240 USD per ounce). The treasure hoard of Smaug is many times that size, making the wealth of the dragon many times greater than the annual GDP of the modern United States! And that’s just considering the value of the gold; once you factor in the value of the countless jewels and gems of his hoard, Smaug’s wealth simply becomes incalculable (Forbes, eat your heart out).
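
As a quick sanity check on that estimate (a sketch; gold’s density of about 19,300 kg/m³ is the one number I’ve added, and the rounded 25 m side lands a bit under the quoted $12.4 trillion):

```python
# Sanity check on the "25 meter cube of gold" valuation quoted above.
GOLD_DENSITY = 19_300    # kg per cubic meter (standard figure)
TROY_OZ_PER_KG = 32.1507
PRICE_PER_OZ = 1_240     # USD, the price quoted above

side = 25                                # meters
mass_kg = GOLD_DENSITY * side ** 3       # ~3.0e8 kg
value = mass_kg * TROY_OZ_PER_KG * PRICE_PER_OZ
print(f"${value / 1e12:.1f} trillion")   # ~$12.0 trillion
```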

It’s easily conceivable that the massive shock to the money supply caused by the loss of the hoard started a deflationary spiral in the surrounding area, resulting in a severe depression in economic activity. No wonder the area around the Lonely Mountain was so desolate!

An interesting follow-up question would be the effects of the return of the hoard to Middle-Earth’s economy with the eventual slaying of the dragon. The re-introduction of such a vast amount of currency to the money supply would almost certainly have a massive inflationary effect on the surrounding economy–certainly not a welcome development for the people of Esgaroth and Erebor trying to rebuild their homes in the wake of the dragon. To borrow a phrase–out of the frying pan, into the fire!

If you liked this post, you might also like my posts on the populations of Gondor and Rohan.


“The Hobbit: the Desolation of Smaug” Review

Tolkien fans who were shocked when Tom Bombadil was excised from the Fellowship of the Ring, beware – the Hobbit: the Desolation of Smaug may cause a minor cardiac event. In the eyes of this Tolkien fan, most of the changes are welcome, but make no mistake: Peter Jackson’s kinetic, action-driven trilogy has clearly departed from the mild bildungsroman written by an Oxford don. This is Jackson’s Middle-Earth now – we are only visitors.

But my, what a trip it is. No filmmaker at work today is able to evoke a living, breathing world with the authority of Peter Jackson. The sets are masterful and the CGI is seamless; every dollar of the $200 million budget is there on-screen. Of particular note are the Dickensian Lake-Town, as squalid as Minas Tirith was magisterial, and a nasty nest of forest spiders.

The main knock against the CGI is, unfortunately, Jackson’s tendency to overindulge. Occasionally the movie descends into physics-free, video game mayhem (recall the acrobatic goblin tunnels sequence from the previous movie for reference). This is entertaining, I’ll admit, but clashes with Jackson’s attempts to make the tone of his Hobbit darker and more thematically consistent with his Rings trilogy.

Now a word about those changes to Tolkien’s vision. Without venturing too far into spoiler territory, I can safely say that Jackson makes explicit some darker connections between the Hobbit and the Rings trilogy that were left buried in the books. Jackson had to up the stakes to justify the price of admission of three movies, but most of the pastoral, childlike wonder of the book is lost. Nor is the writing in this regard particularly deft – I found myself rolling my eyes when “evil” and “darkness” were spoken about in ominous tones for the hundredth time.

The addition of the elf warrior Tauriel (Evangeline Lilly), a character absent from the book, was, I feel, necessary; with Galadriel absent, she’s the only noteworthy female character in the almost three-hour film. In her gymnastic feats of orc-slaying she’s more than a match for the other chief elven character, the returning Legolas (Orlando Bloom). Interestingly, these two characters (neither of whom appears in the book) form two corners of an unexpected love triangle. Even more interestingly, I didn’t mind – the dialogue in the romance scenes is sprightly and provides a welcome relief from the endless orc-slaying.

Indeed, amidst all the swashbuckling and barrel-riding one can forget that there are some pretty fine actors in this film. Ian McKellen continues to embody Gandalf the Grey, and Stephen Fry finds the right note of sleaziness as the venal Master of Lake-Town. In my review of an Unexpected Journey, I feel I unjustly overlooked Martin Freeman’s performance as Bilbo Baggins, the titular hobbit. Freeman is excellent. In this world of epic quests, sorcerers, and one magnificent dragon, he gives a naturalistic performance with an endearing set of tics and a unique, off-hand delivery which helps ground the fantasy.

But more about the dragon. Smaug is the best dragon ever committed to film, bar none. He moves with real weight and authority, and his sonorous rumblings – the digitally enhanced voice-work of Benedict Cumberbatch – are appropriately terrifying. Smaug’s conversation with Bilbo, like the riddle sequence with Gollum in the previous film, is easily the best part of the movie. Smaug comes in only during the last third, just when my focus began to drift – but then he grabs hold of it, as only a thousand-ton dragon can, and doesn’t let go until the film’s abrupt and anticlimactic end.

It’s rare that a movie with a running time close to three hours can hold me in rapt attention. I complained that an Unexpected Journey could have easily lost half an hour on the cutting-room floor; it would be difficult to identify parts of this movie that could be cut without loss. The film moves briskly, sometimes too briskly, and the ending feels more like a setup for the next film than a proper emotional release.

But, like its predecessor, the delights of Peter Jackson’s Middle-Earth far exceed the faults. See it for the experience of a cinematic universe better-realized than any other, and for the work of a director who clearly loves his subject and his craft.

And for that cunning, fiery dragon at the end.