Sunday, December 19, 2010

Q: How do you write a strong lede for an article?

Practice! And read the best stories in print. And read Blundell's "The Art and Craft of Feature Writing." Go look up old stories and see how they were constructed. Compare the NYT, WSJ, Washington Post and L.A. Times on the same story in history and see the choices each one made. More practice.

In straight news, you usually want to give the most important part of the story as clearly and succinctly as possible:

President John Fitzgerald Kennedy was shot and killed by an assassin today. (Tom Wicker, The New York Times, Nov. 23, 1963)


Men have landed and walked on the moon. (John Noble Wilford, The New York Times, July 21, 1969)


Five men, one of whom said he is a former employee of the Central Intelligence Agency, were arrested at 2:30 a.m. yesterday in what authorities described as an elaborate plot to bug the offices of the Democratic National Committee here. (Alfred E. Lewis, The Washington Post, June 18, 1972)


Doctors in New York and California have diagnosed among homosexual men 41 cases of a rare and often rapidly fatal form of cancer. (Lawrence K. Altman, The New York Times, July 3, 1981)


Enron Corp. filed for protection from creditors in a New York bankruptcy court, the biggest such filing in U.S. history. (Rebecca Smith, The Wall Street Journal, Dec. 3, 2001)


Months after the Sept. 11 attacks, President Bush secretly authorized the National Security Agency to eavesdrop on Americans and others inside the United States to search for evidence of terrorist activity without the court-approved warrants ordinarily required for domestic spying, according to government officials. (James Risen and Eric Lichtblau, The New York Times, Dec. 16, 2005)

I often found that homing in on the "most important part," especially for a complex story, needed some repose -- sometimes you don't see what's in front of your face when you have been chasing it for two months. Some of our editors were fantastic midwives, helping reporters get to the meat of the story. For me, there was nothing like trying to discuss the story with someone else to clarify my thoughts.

Of course there is also nothing like trying to pound it out at 3:55 p.m. with an editor screaming for copy! That, of course, is also a way to improve.

(And to be clear -- by the time you see a lede on page 1 of a national newspaper, at least four people have had their hands on it, if not eight on a bad day.)

In contrast to straight news, the lede is much less constrained in a feature story. More than a few books have been written about feature writing (Blundell's being one of the best) and it is an art! The best ledes edify, amuse, and still get to the point:

To you, it's just a backhoe, a hulking mass of metal with a big bucket attached. But to Harvey Neigum, it's an extension of his soul. So here he is, after years of practice, climbing into his machine with the championship at stake.

It has all come down to this: Can Mr. Neigum make his eight-ton backhoe do the moonwalk? (John R. Wilke, The Wall Street Journal, Jan. 7, 1992)


Everything is bigger in Texas, even 10%.

Prominently displayed in Shirley Faske's office at Westlake High School is a notice advising students that they must rank "in the top 10%" of their graduating class to gain automatic admission to a Texas public university.

But last year, suburban Westlake crammed 63 of its 491 seniors, or 12.8%, into the top 10th, violating the laws of mathematics -- and of the Lone Star State. (Daniel Golden, The Wall Street Journal, May 15, 2000)


Here, amid the cracked earth and grizzled acacias of northwestern Kenya, rumors were running rampant about North Dakota.

Dozens of boys crowded around the UN compound where someone, somewhere, held a list of the US cities where they might be offered homes. An older boy asserted that North Dakota is colder than Nairobi, but this was impossible to confirm. Another was enraptured with the idea of Albany, and dreamily repeated the phrase “Albany, New York. Albany, New York,” a spot whose distance he estimated at a million, or possibly 2 million, kilometers.

And a 17-year-old, John Deng, had his heart set on Chicago, having learned that it is home to an abundance of bulls. To the son and grandson and great-grandson of cattle herders from the Dinka tribe — men who still sing adoring songs about the horns of their favorite oxen — Chicago has enormous appeal.

“I see that on some shirts, like Chicago Bulls. We believe that in Chicago we will have a lot of bulls,” said Deng, a young man with a gap-toothed smile who speaks a formal English akin to that of a BBC announcer.

Within hours, however, Deng would be told the name of a place that suggests a landscape without cattle: Arlington, Mass. It would mean nothing to him.

The flights to America are leaving every day now, screaming out of the bush in a huge cloud of orange dust, as the great migration of the group known as the Lost Boys of Sudan gets underway. Heads down, barefoot except for shower thongs, the departing boys file into the aircraft as grave as spacemen, sometimes without even looking back at the friends standing five deep against the barbed wire. (Ellen Barry, The Boston Globe, Jan. 7, 2001)


A bad taste lingers in the mouths of many mall walkers.

''Oh, yes, I can still taste it,'' said Mabel Mickle, 71, a retired financial analyst who for eight years has been walking for exercise in Evergreen Plaza, an enclosed shopping center in this southwestern suburb of Chicago.

There are some 1,500 malls in the United States, and most of them open early each morning to legions of sneaker-shod, hyper-organized and often elderly mall walkers. But Evergreen Plaza tried to go boldly where no mall had gone before. It all started in February when management sent out this notice: ''The mall will no longer be available to walkers."

Bruce Provo, managing partner of a company that owns the mall and the one who ordered its walkers into exile, said in his lockout order, ''We can no longer turn a blind eye to the realities of the world we live in.'' Those realities included mall walkers who muddied freshly buffed floors, hogged prime parking and demanded free Christmas gifts, Mr. Provo said in an interview.

''It got out of control from a standpoint of entitlement,'' he said. ''Predominantly they are seniors, O.K., and seniors are not great spenders, are they?''

Burned by a firestorm of bad publicity, boycott threats and patron poaching from nearby malls, Mr. Provo, 50, has been forced to retreat.

Across the United States, there are mall retailers who regard walkers as marginal shoppers, said Malachy Kavanagh, a spokesman for the International Council of Shopping Centers, a trade group based in Manhattan. But with the conspicuous exception of Mr. Provo, most dare not say so in public. Instead, malls from Maine to California, from Florida to Washington State, embrace their walkers in the name of community relations, comforting them with free coffee, shopping discounts and monthly blood pressure checks.

The antiwalker war that Evergreen Plaza fought so aggressively and then lost so pathetically demonstrates one of the hard realities of climate-controlled retail: In a nation that grew up in the mall and is now growing old there, mall walkers rule. (Blaine Harden, The New York Times, Aug. 28, 2001)



THE delicate posturing began with the phone call.

The proposal was that two buddies back in New York City for a holiday break in December meet to visit the Museum of Modern Art after its major renovation.

"He explicitly said, 'I know this is kind of weird, but we should probably go,' " said Matthew Speiser, 25, recalling his conversation with John Putman, 28, a former classmate from Williams College.

The weirdness was apparent once they reached the museum, where they semi-avoided each other as they made their way through the galleries and eschewed any public displays of connoisseurship. "We definitely went out of our way to look at things separately," recalled Mr. Speiser, who has had art-history classes in his time.

"We shuffled. We probably both pretended to know less about the art than we did."

Eager to cut the tension following what they perceived to be a slightly unmanly excursion - two guys looking at art together - they headed directly to a bar. "We couldn't stop talking about the fact that it was ridiculous we had spent the whole day together one on one," said Mr. Speiser, who is straight, as is Mr. Putman. "We were purging ourselves of insecurity."

Anyone who finds a date with a potential romantic partner to be a minefield of unspoken rules should consider the man date, a rendezvous between two straight men that is even more socially perilous. (Jennifer 8. Lee, The New York Times, April 10, 2005)


What time is it when the clock strikes half past 62?

Time to change the way we measure time, according to a U.S. government proposal that businesses favor, astronomers abominate and Britain sees as a threat to its venerable standard, Greenwich Mean Time.

Word of the U.S. proposal, made secretly to a United Nations body, began leaking to scientists earlier this month. The plan would simplify the world's timekeeping by making each day last exactly 24 hours. Right now, that's not always the case. (Keith J. Winstein, The Wall Street Journal, July 29, 2005)



ON A SUMMER DAY IN 2002, shares of Affiliated Computer Services Inc. sank to their lowest level in a year. Oddly, that was good news for Chief Executive Jeffrey Rich.

His annual grant of stock options was dated that day, entitling him to buy stock at that price for years. Had they been dated a week later, when the stock was 27% higher, they'd have been far less rewarding. It was the same through much of Mr. Rich's tenure: In a striking pattern, all six of his stock-option grants from 1995 to 2002 were dated just before a rise in the stock price, often at the bottom of a steep drop.

Just lucky? A Wall Street Journal analysis suggests the odds of this happening by chance are extraordinarily remote -- around one in 300 billion. The odds of winning the multistate Powerball lottery with a $1 ticket are one in 146 million. (Charles Forelle and James Bandler, The Wall Street Journal, March 18, 2006)


Unfortunately, sometimes a lede can be "too good to check" as the saying goes:

Ian Restil, a 15-year-old computer hacker who looks like an even more adolescent version of Bill Gates, is throwing a tantrum. "I want more money. I want a Miata. I want a trip to Disney World. I want X-Man comic book number one. I want a lifetime subscription to Playboy, and throw in Penthouse. Show me the money! Show me the money!" Over and over again, the boy, who is wearing a frayed Cal Ripken Jr. t-shirt, is shouting his demands. Across the table, executives from a California software firm called Jukt Micronics are listening--and trying ever so delicately to oblige. "Excuse me, sir," one of the suits says, tentatively, to the pimply teenager. "Excuse me. Pardon me for interrupting you, sir. We can arrange more money for you. Then, you can buy the comic book, and then, when you're of more, say, appropriate age, you can buy the car and pornographic magazines on your own."

It's pretty amazing that a 15-year-old could get a big-time software firm to grovel like that. What's more amazing, though, is how Ian got Jukt's attention--by breaking into its databases. In March, Restil--whose nom de plume is "Big Bad Bionic Boy"--used a computer at his high school library to hack into Jukt. Once he got past the company's online security system, he posted every employee's salary on the company's website alongside more than a dozen pictures of naked women, each with the caption: "the big bad bionic boy has been here baby." After weeks of trying futilely to figure out how Ian cracked the security program, Jukt's engineers gave up. That's when the company came to Ian's Bethesda, Maryland, home--to hire him.

And Ian, clever boy that he is, had been expecting them. "The principal told us to hire a defense lawyer fast, because Ian was in deep trouble," says his mother, Jamie Restil. "Ian laughed and told us to get an agent. Our boy was definitely right." Ian says he knew that Jukt would determine it was cheaper to hire him--and pay him to fix their database--than it would be to have engineers do it. And he knew this because the same thing had happened to more than a dozen online friends. (Stephen Glass, The New Republic, May 18, 1998)

Wednesday, September 15, 2010

Q: In what ways are the US News & World Report rankings for colleges flawed?

About a quarter of the U.S. News formula is an opinion poll of university administrators (presidents, provosts and deans) and high school college counselors about their views on the reputations of the colleges.

One criticism: does this really speak well for the validity of the results, when 23% of the result comes from administrators at competing universities and high-school employees? Does a 45-year-old guidance counselor at Evanston High School or a 60-year-old dean at the University of Chicago really have any idea whether you'll get a better undergraduate education at Stanford, Harvard, Penn or Yale if you go there in 2011?

And, of course, can a national university really have a single, unitary reputation score? Surely the kind of student who would thrive at Caltech (the #1 school in the country a decade ago, despite offering no BA degree) is not the same as the student who would thrive studying medieval literature at Yale.

And a second criticism: like all components of the U.S. News formula, the results of the opinion poll carry no margin of error! The rankings are calculated as if every input -- the competitor and high-school employee view of a school's "reputation," its graduation rate, the average class size -- were absolutely certain. That is not so.

In addition to statistical error, there's also a substantial systematic error in some of the parameters -- e.g. the "average class size" has a lot of slop in what you count as a class (just lectures? lectures and discussion sections? lectures, discussion sessions, and tutorials?). So does the graduation rate, etc. These figures should have error bars on them too.

I have discussed this briefly with Bob Morse, the guy at U.S. News who calculates the rankings, but he wasn't receptive to the idea that they should put appropriate error bars on all the inputs and propagate the uncertainty to the outputs, marking statistical ties as appropriate. (I suspect these statistical ties might cross substantial swaths of the final rankings, which may partly explain why U.S. News wouldn't be excited to try to sell magazines with that technique -- who wants to announce a nine-way tie for 1st place?) His position was that they assume the data coming from the schools is right, and they don't waste time worrying about what the rankings would be if the supplied figures weren't right.
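To make the error-bar point concrete, here is a minimal Monte Carlo sketch in Python. The school names, composite scores, and standard errors are all invented for illustration -- this is not U.S. News data -- but the mechanics show how propagating even modest uncertainty through the inputs turns a crisp ranking into a statistical tie:

```python
import random

# Hypothetical inputs: three invented schools, each with a composite
# score and an assumed standard error on that score. None of this is
# U.S. News data; it exists only to show the mechanics.
SCHOOLS = {
    "Alpha U": (91.0, 2.0),
    "Beta College": (90.2, 2.0),
    "Gamma Tech": (88.9, 2.0),
}

def rank_once(rng):
    """Perturb each score by its error and rank the schools."""
    draws = {name: rng.gauss(mu, se) for name, (mu, se) in SCHOOLS.items()}
    return sorted(draws, key=draws.get, reverse=True)

def first_place_shares(trials=20000, seed=1):
    """Fraction of simulated rankings in which each school finishes first."""
    rng = random.Random(seed)
    counts = dict.fromkeys(SCHOOLS, 0)
    for _ in range(trials):
        counts[rank_once(rng)[0]] += 1
    return {name: n / trials for name, n in counts.items()}
```

With these numbers, no school finishes first even 90% of the time -- every school is a plausible #1. That is the statistical tie a published ranking, reported as a bare ordering, never reveals.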

Sunday, September 12, 2010

Q: Could a flatlander utilize flatland technology to perceive something in the third dimension?

There are some big differences between living in a plane, versus living in Euclidean 3D and merely perceiving a plane slice of it. Here's one: the wave equation in three dimensions is "nondispersive," meaning all frequencies travel at the same speed and the region of influence of an impulse is just an expanding spherical shell. So if you say something 100 meters away, or transmit a signal from your radio, I will hear a delayed, quieter version of what you said.

In two dimensions (or any even number of dimensions), the wave equation is dispersive! You do not just hear a delayed, quieter version of stimuli at a distance; the sound itself is actually changed by traveling through the medium. The region of influence isn't a circle; it's a filled-in disk. Imagine throwing a rock into a pond -- the resulting ripples aren't just at the outside of the circle. The rock continues influencing the interior of the circle even after the first news of its plop has passed by. Similarly, the sound of thunder, constrained to a near-2D slice of the earth's atmosphere, is not just a bang when you hear it from far away. It gets transformed into a rolling sound because the different frequencies travel at different speeds.

So a flatlander who understood partial differential equations (this may be a small number of them...) could distinguish between these two possibilities (2d world, versus 2d slice of 3d world) by observing the behavior of waves as they propagate.

(NB: The proof that the wave equation is dispersive in even-dimensional Euclidean spaces and nondispersive in odd-dimensional spaces is really frickin' hard! I have seen it in a monograph and did not understand it at all.)
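The result itself, though, is easy to state even if the proof isn't. For the wave equation with unit wave speed, the standard textbook retarded Green's functions are:

```latex
% 3D: the response is confined to the light cone t = r -- sharp signals
G_3(t, r) = \frac{\delta(t - r)}{4\pi r}

% 2D: the response fills the cone's interior -- a lingering tail
G_2(t, r) = \frac{H(t - r)}{2\pi\sqrt{t^2 - r^2}}
```

where H is the Heaviside step function. The delta in 3D is why you hear a clean, delayed copy of a distant sound; the Heaviside factor in 2D is why every point keeps ringing after the wavefront passes -- the rolling-thunder tail our flatlander would measure.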

Thursday, August 19, 2010

3G and me

In 2002, I got my first cell phone.
June was stuffy in Manhattan, and my summer internship copy-editing the New York Sun, the now-defunct right-wing newspaper, was just about to start. I swam through the humid air past Madison Square Park to get to the store before closing.
"You want this one," said the salesman at the RadioShack, pointing to a sleek model then on sale. "It's a 3G phone. It'll work with Sprint's new 3G network they're rolling out later this summer."
"Ok," I said. Sure enough, it had 3G:
[Photo: the back of a Sanyo SCP-6200 handset, labeled "QUALCOMM 3G CDMA"]
Fig. 1: Sprint's Sanyo 3G phone, circa mid-2002. An orange of more recent vintage looks on.
A few months later -- after all the Sun's editorials casting doubt on whether lead paint can really poison you had been edited and sent off to our eight readers, and I was back at school -- Sprint did roll out their 3G network:
Sprint launched nationwide 3G service in the 2002 third quarter. The service, marketed as "PCS Vision", allows consumer and business customers to use their Vision-enabled PCS devices to take and receive pictures, check personal and corporate e-mail, play games with full-color graphics and polyphonic sounds and browse the Internet wirelessly with speeds up to 144 kbps (with average speeds of 50 to 70 kbps).
I called Sprint and tried to subscribe. "Sir, you need a 3G phone to sign up," they told me.
"I have one!" I said proudly. "It says 3G CDMA right on the back!"
"Oh, I'm sorry sir. We've changed the labeling of that model. That phone doesn't have true 3G. It doesn't say that on the back any more. If you like I would be happy to sell you the next model, the SCP-6400, which has true 3G."
"No, thanks," I said, thinking that 3G was pretty much a crock, while wryly appreciating RadioShack's ability to make you feel cheated even on a $30 cellphone.
Sure enough, when my phone died and had to be replaced, I saw the new one only said "QUALCOMM CDMA" -- no more "3G". It had been revised downward.
Meanwhile, Sprint's competitors were busy deploying their own nationwide 3G networks. Cingular, then a joint venture of SBC and BellSouth, trumpeted each step in the process:
June 2003:
ATLANTA, June 30 -- Cingular Wireless today announced the world's first commercial deployment of wireless services using Enhanced Datarate for Global Evolution (EDGE) technology. Cingular's initial EDGE service offering is in its Indianapolis market, with subsequent deployments expected later in the year.
Building on more than a decade of wireless data experience, Cingular's EDGE technology enables true "third generation" (3G) wireless data services with data speeds typically three times faster than those available on GSM/GPRS networks.

Or October 2003:
Cingular began offering its 3G service EDGE (Enhanced Datarate for Global Evolution) in Indianapolis in July, becoming the first commercial wireless company in the world to offer the service.

Or June 2004:
This year, further enhancements have been made to the network with the launch of EDGE in Connecticut, a high-speed wireless data service which gives customers true "third generation" (3G) wireless data services with data speeds typically three times faster than what was available on GPRS.
Those of you who care about these things will probably be jumping up and down right now, and/or closing the browser window. "EDGE isn't 3G!" you are saying. "It's 2.9G at best! And neither is 1xRTT, which is all the Sanyo SCP-6200 had. That's barely 2.5G! Maybe 2.75G on a clear day."
These people, who while enthusiastic sometimes seem to have been born yesterday, would point to the kerfuffle when Apple released the original iPhone in 2007 for Cingular and only supported EDGE. As the Wall Street Journal wrote:
Detractors and fans are going toe to toe on online forums. Much of the latest criticism is zooming in on Apple's choice of technologies to use with the new phone and its decision to partner exclusively with AT&T Inc.'s Cingular Wireless, which is being rebranded as AT&T.
For example, the iPhone won't use the fastest wireless Internet connection available, relying on so-called second-generation, or 2G, rather than faster 3G networks now being rolled out by major wireless carriers. Because of this, industry experts expect features of the iPhone such as Web browsing and downloading not to be very fast.
Tim Cook, Apple's chief operating officer, said during a conference call with analysts yesterday the company is sold on Cingular's 2G EDGE network because "it's much more widespread and widely deployed in the U.S." Mr. Cook didn't comment on whether Apple will eventually support 3G but said, "Obviously we would be where the technology is over time." Some people refer to EDGE as 2.5G.
By 2007, Cingular/AT&T was happy to downgrade its EDGE offerings in favor of a newer kind of 3G (known as W-CDMA or UMTS). From an interview with AT&T's chief, Randall Stephenson, in the New York Times in June 2007:
''I got to tell you, carrying this thing around and experiencing those kinds of speeds on a wireless handset, your imagination begins to run in terms of what's possible,'' he said, ''and by the way, there's not a 3G network available in Ottumwa, Iowa,'' referring to the so-called third generation of Web-enabled cellphones that require faster networks. ''If you want to sell these devices in a variety of places, Edge is the only opportunity you have.''
AT&T has invested $16 billion in its network over the last two years, and the network is now designed to handle the expected increase in wireless data users, he said, adding: ''Capacity won't be an issue. The network is ready.''
Ok, what are some quick takeaways here?
  • What Sprint sold as "3G" in 2002 (1xRTT voice), it rescinded later that year and relabeled the phones.
  • What counted as "3G" for Sprint in 2003 (1xRTT data), isn't any more either.
  • What in 2004 constituted "true 'third generation' (3G)" to Cingular/AT&T, the company had retroactively downgraded to 2G or 2.5G or 2.9G by 2007.
  • From an engineer's perspective, the 3G interfaces, if you read a book on telecom engineering, are CDMA2000 (including 1xRTT and EV-DO), EDGE, and W-CDMA (including UMTS, with or without HSUPA and HSDPA). The International Telecommunication Union has published a standard for third-generation wireless communications, known as IMT-2000, that includes those three and a few others.
  • To a first approximation, the first launch of "3G" in the United States, around 2002 and 2003, was a dud. The carriers responded by dusting themselves off, redoubling their efforts, deploying a new thing and retroactively downgrading their old "3G" product to be... some smaller number of G's. "3G" itself is not a technical term with a whole lot of meaning, especially as it lumps together so many incompatible, competing air interface protocols. The situation for consumers was less confused in Europe, where GSM and W-CDMA are dominant, governments auctioned new frequencies set aside for "3G," and the carrier offerings were more distinct.
  • The same song-and-dance is likely to play out over "4G" -- a term that engineers tentatively apply to a forthcoming ITU standard called IMT-Advanced, and carriers apply to whatever they want you to buy now. You might notice that Sprint is currently selling Mobile WiMAX as "4G." Mobile WiMAX is part of IMT-2000 -- the 3G standard. Verizon Wireless is selling something called "LTE" as "4G" -- it ain't in IMT-Advanced either. Today's "4G" products are like the "3G" of 2002 and 2003 -- they will become "3.75G" as soon as the next hot thing comes out.
But the point I really want to make is: this is all a red herring. Focusing on the protocol between your cell phone and the tower -- or worse, spending money on that basis -- is letting yourself be distracted. It's like the secret pick-me-up in Geritol, concocted by Madison Avenue instead of a chemist.
A cell phone is essentially sharing a swath of radio spectrum with a bunch of other people within a cell. Think of it like a cable modem or any other ISP. You can have the world's most sophisticated modem, but if it's trying to talk in a tiny slice of spectrum shared with everybody else within miles around (because there aren't enough towers to divide you up into cells), it'll still be awful.
Consider, for example, the performance I get from a Verizon "3G" USB modem:

3060 packets transmitted, 3007 received, 1% packet loss, time 3061925ms
rtt min/avg/max/mdev = 121.554/404.199/22307.520/1213.055 ms, pipe 23
Pretty sad! But hey, it's 3G. In truth, a lot of boring factors control the performance of your cell phone data transmissions, principally:
  1. How much spectrum has the carrier licensed in my city, and how much is allocated to this kind of modulation?
  2. How many other people am I sharing the local tower with? In other words, how big is my cell, and how many towers has the carrier built or contracted with?
  3. How much throughput are my cellmates trying to consume?
  4. How much throughput has the carrier built in its back-end network connecting to the tower?
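The shared-medium point can be made concrete with a back-of-the-envelope calculation. This Python sketch treats the cell as one Shannon channel split among its active users; the bandwidth, SNR, and user count are hypothetical round numbers, not any carrier's actual deployment:

```python
import math

# Back-of-the-envelope model of a shared cell. All numbers used with
# these functions are hypothetical round figures for illustration.
def cell_capacity_bps(bandwidth_hz, snr_db):
    """Shannon limit C = B * log2(1 + SNR) for the whole cell."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

def per_user_bps(bandwidth_hz, snr_db, active_users):
    """Fair share of the cell's capacity among the active users."""
    return cell_capacity_bps(bandwidth_hz, snr_db) / active_users

# A 5 MHz slice at 10 dB SNR is ~17.3 Mbps for the whole cell --
# but shared among 50 active users, that's only ~0.35 Mbps each,
# no matter how sophisticated each handset's modem is.
```

Doubling the licensed spectrum or halving the cell size moves that per-user number far more than any incremental air-interface upgrade -- which is exactly why the boring factors above dominate.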
You might notice that all of these meat-and-potatoes factors involve the carrier spending money, and they all involve gradual improvement in behind-the-scenes infrastructure that's hard to get customers excited about. Persuading you to buy a new cell phone with a sophisticated modem and sign up for a two-year contract is a different story. So they don't sell you something measurable where they could be held accountable; they sell how sweet it feels to be using a sophisticated radio modem protocol to talk to them.
Don't get me wrong -- UMTS and EV-DO are sophisticated protocols, and a lot of smart people and clever techniques made them legitimate engineering accomplishments. But the boring factors -- the raw resources being shared among the nearby customers -- dictate your performance just as much as incremental improvements in the air interface. What we really ought to care about is the same as with any Internet service provider -- the throughput and latency and reliability you get to the endpoints you want to reach. That's what matters, not the sophistication of one piece of the puzzle.
If the carrier sold you "384 kbps Internet access anywhere in the coverage area, outdoors," that would be something you could hold them accountable for. The carrier might even have to put a brake on signing up new customers until it could build new towers or license more spectrum for everybody to share, if it made that guarantee.
Some have proposed even more freely enterprising business models -- like having your phone get minute-to-minute bids from the local towers on who will carry your traffic for what price, and accept the lowest bidder who offers acceptable performance.
Selling you "3G" -- well, that's a lot easier to live up to. And it changes every year. So don't tell me how many G's your new phone has. We've loved and lost so many G's at this point. Tell me you got a new phone where you pay to get 1 Mbps and 100 ms rtt to major exchange points. When the market moves forward enough to make that a reality, that'll be a generation worth celebrating.

Friday, July 23, 2010

Q: What are some good day trips around Boston?

  • Take the ferry or sail to Spectacle Island, visit the museum on the island's history, hike up to the top of the island (the highest point in the harbor), enjoy the view, picnic.
  • Bike the Minuteman Trail from Alewife to Bedford, continue to Concord center, enjoy the old New England town, eat lunch and go antiquing/bookshopping, continue to Walden Pond, walk around, see Thoreau's house, swim in the pond.
  • Drive out to Sterling and navigate the Davis Mega Maze. Stop on the way back to go apple-picking and drink fresh cider.
  • Walk the Freedom Trail and stop in at some of the historical sights. Walk Newbury Street and all the touristy shops.
  • Rent a kayak in the Broad Canal (in Cambridge) and go up and down the Charles River. (http://www.paddleboston.com/kend...) Dock it at the kayak dock at the North Point Park, one of the area's most manicured and beautiful parks (built at a cost of tens of millions as part of the Big Dig to mollify environmentalists) that's also usually deserted and one of the most difficult to get to (both of the bridges originally planned to access the park were cut from the Big Dig after it ran over budget; one is now under construction again with federal stimulus money). Explore the park, kayak around the artificial islands, and try your luck at the incredibly dangerous but fun adult "goth" playground that tries to kill you and the spinny things on the intermediate playground. (http://www.yelp.com/biz/north-po...) Kayak back.
  • Tuesday, June 29, 2010

    Q: What advice would you give a college freshman in the Boston area to make life easy, fun, and successful?

    Get out and enjoy all that Boston has to offer! Spend the summers here -- Boston is beautiful in the summer. Get a guidebook and peruse it. Read the Phoenix/Weekly Dig/Improper Bostonian to know what's going on. Go picnic on the Boston Harbor Islands (there are fast ferries to Spectacle and Georges), get a bicycle and take it everywhere (Boston is a bike-friendly town despite its reputation for crazy drivers, mostly because everything is so close together), learn to sail on the Charles River, ride the Minuteman trail, do the corn maze in the fall at Davis Mega Maze, go skinny-dipping (or normal-dipping) in Walden Pond, walk the Freedom Trail and Newbury Street, volunteer at a high school, relax at Tosci's, go to weird plays (at Mary O'Malley park in Chelsea there are free plays in the summer, plus Back Bay and in the theater district), join ubernerd clubs like the Amateur Telescope Makers of Boston, acquire the local passion for the Red Sox, attend the film festivals and movies at the Somerville theater or the Brattle or Coolidge Corner or the Landmark or Harvard or MIT, go camp out at 9 a.m. on July 4th for a spot on the Esplanade to picnic and watch the Boston Pops concert and fireworks over the river with 500,000 other people.

    To be honest, most of the cool stuff I love about Boston I didn't discover until I had been here for like five years and started venturing more off the campus.

    Sunday, June 13, 2010

    Q: What is the difference between Bayesian and frequentist statistics?

    Mathematically speaking, frequentist and Bayesian methods differ in what they care about, and the kind of errors they're willing to accept.

    Generally speaking, frequentist approaches posit that the world is one way (e.g., a parameter has one particular true value), and try to conduct experiments whose resulting conclusion -- no matter the true value of the parameter -- will be correct with at least some minimum probability.

    As a result, to express uncertainty in our knowledge after an experiment, the frequentist approach uses a "confidence interval" -- a range of values designed to include the true value of the parameter with some minimum probability, say 95%. A frequentist will design the experiment and 95% confidence interval procedure so that out of every 100 experiments run start to finish, at least 95 of the resulting confidence intervals will be expected to include the true value of the parameter. The other 5 might be slightly wrong, or they might be complete nonsense -- formally speaking that's ok as far as the approach is concerned, as long as 95 out of 100 inferences are correct. (Of course we would prefer them to be slightly wrong, not total nonsense.)

    Bayesian approaches formulate the problem differently. Instead of saying the parameter simply has one (unknown) true value, a Bayesian method says the parameter's value is fixed but has been chosen from some probability distribution -- known as the prior probability distribution. (Another way to say that is that before taking any measurements, the Bayesian assigns a probability distribution, which they call a belief state, on what the true value of the parameter happens to be.) This "prior" might be known (imagine trying to estimate the size of a truck, if we know the overall distribution of truck sizes from the DMV) or it might be an assumption drawn out of thin air. The Bayesian inference is simpler -- we collect some data, and then calculate the probability of different values of the parameter GIVEN the data. This new probability distribution is called the "a posteriori probability" or simply the "posterior." Bayesian approaches can summarize their uncertainty by giving a range of values on the posterior probability distribution that includes 95% of the probability -- this is called a "95% credibility interval."
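    The truck example above can be sketched numerically. Everything here is invented for illustration: a made-up discrete "DMV" prior over truck lengths and a deliberately crude noise model. The mechanics are just Bayes' rule: posterior ∝ prior × likelihood, renormalized.

```python
# Hypothetical prior over truck lengths in meters, imagined as known
# from DMV records -- invented numbers, not data from the post.
prior = {8: 0.2, 10: 0.5, 12: 0.3}

def likelihood(measurement, true_length):
    # Made-up noise model: the measurement always lands within 1 m of
    # the truth, so any candidate more than 1 m away is ruled out.
    return 0.5 if abs(measurement - true_length) <= 1.0 else 0.0

measurement = 9.2  # one observed datum

# Bayes' rule: posterior(theta) is proportional to prior(theta) * likelihood
unnormalized = {L: prior[L] * likelihood(measurement, L) for L in prior}
total = sum(unnormalized.values())
posterior = {L: p / total for L, p in unnormalized.items()}
print(posterior)
```

    Note how the measurement of 9.2 m rules out the 12 m trucks entirely and shifts belief between the remaining candidates in proportion to the prior.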

    A Bayesian partisan might criticize the frequentist confidence interval like this: "So what if 95 out of 100 experiments yield a confidence interval that includes the true value? I don't care about 99 experiments I DIDN'T DO; I care about this experiment I DID DO. Your rule allows 5 out of the 100 to be complete nonsense [negative values, impossible values] as long as the other 95 are correct; that's ridiculous."

    A frequentist die-hard might criticize the Bayesian credibility interval like this: "So what if 95% of the posterior probability is included in this range? What if the true value is, say, 0.37? If it is, then your method, run start to finish, will be WRONG 75% of the time. Your response is, 'Oh well, that's ok because according to the prior it's very rare that the value is 0.37,' and that may be so, but I want a method that works for ANY possible value of the parameter. I don't care about 99 values of the parameter that IT DOESN'T HAVE; I care about the one true value IT DOES HAVE. Oh also, by the way, your answers are only correct if the prior is correct. If you just pull it out of thin air because it feels right, you can be way off."

    In a sense both of these partisans are correct in their criticisms of each other's methods, but I would urge you to think mathematically about the distinction. There don't need to be Bayesians and frequentists any more than there are realnumberists and integeristos; there are different kinds of methods that apply math to calculate different things. This is a complex subject with a lot of sides to it, of which these examples are a tiny part -- books on Bayesian analysis could fill many bookshelves, not to mention classical statistics, which would fill a whole library.

    ------------

    Here's an extended example that shows the difference precisely in a discrete example.

    When I was a child my mother used to occasionally surprise me by ordering a jar of chocolate-chip cookies to be delivered by mail. The delivery company stocked four different kinds of cookie jars -- type A, type B, type C, and type D, and they were all on the same truck and you were never sure what type you would get. Each jar had exactly 100 cookies, but the feature that distinguished the different cookie jars was their respective distributions of chocolate chips per cookie. If you reached into a jar and took out a single cookie uniformly at random, these are the probability distributions you would get on the number of chips:
    A type-A cookie jar, for example, has 70 cookies with two chips each, and no cookies with four chips or more! A type-D cookie jar has 70 cookies with one chip each. Notice how each vertical column is a probability mass function -- the conditional probability of the number of chips you'd get, given that the jar = A, or B, or C, or D, and each column sums to 100.

    I used to love to play a game as soon as the deliveryman dropped off my new cookie jar. I'd pull one single cookie at random from the jar, count the chips on the cookie, and try to express my uncertainty -- at the 70% level -- about which jars it could be. Thus it's the identity of the jar (A, B, C or D) that is the value of the parameter being estimated. The number of chips (0, 1, 2, 3 or 4) is the outcome or the observation or the sample.

    Originally I played this game using a frequentist, 70% confidence interval. Such an interval needs to make sure that no matter the true value of the parameter, meaning no matter which cookie jar I got, the interval would cover that true value with at least 70% probability.

    An interval, of course, is a function that relates an outcome (a row) to a set of values of the parameter (a set of columns). But to construct the confidence interval and guarantee 70% coverage, we need to work "vertically" -- looking at each column in turn, and making sure that 70% of the probability mass function is covered so that 70% of the time, that column's identity will be part of the interval that results. Remember that it's the vertical columns that form a p.m.f.
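    The "vertical" construction can be sketched in code. The table below is a made-up stand-in for the jar distributions (illustrative probabilities only, not the story's actual numbers). For each jar, we greedily cover its most likely outcomes until at least 70% of that column's mass is included; the confidence interval for an outcome is then the set of jars whose coverage included that outcome.

```python
# p[chips][jar]: hypothetical conditional probabilities P(chips | jar).
# Each *column* (fixed jar) sums to 1, as in the story's table.
jars = ["A", "B", "C", "D"]
p = {
    0: {"A": 0.05, "B": 0.10, "C": 0.25, "D": 0.30},
    1: {"A": 0.05, "B": 0.15, "C": 0.25, "D": 0.40},
    2: {"A": 0.60, "B": 0.20, "C": 0.20, "D": 0.20},
    3: {"A": 0.20, "B": 0.25, "C": 0.20, "D": 0.08},
    4: {"A": 0.10, "B": 0.30, "C": 0.10, "D": 0.02},
}

# Work "vertically": for each jar (column), add its most likely outcomes
# until at least 70% of that column's probability mass is covered.
interval = {chips: set() for chips in p}
for jar in jars:
    mass = 0.0
    for chips in sorted(p, key=lambda c: p[c][jar], reverse=True):
        if mass >= 0.70:
            break
        interval[chips].add(jar)
        mass += p[chips][jar]

print(interval)  # e.g. interval[2] is the confidence set after seeing 2 chips
```

    Because every column was covered to at least 70%, whichever jar the deliveryman truly dropped off, the resulting interval contains it with at least 70% probability.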

    So after doing that procedure, I ended up with these intervals:
    For example, if the number of chips on the cookie I draw is 1, my confidence interval will be {B,C,D}. If the number is 4, my confidence interval will be {B,C}. Notice that since the covered entries in each column sum to 70% or greater, no matter which column we are truly in (no matter which jar the deliveryman dropped off), the interval resulting from this procedure will include the correct jar with at least 70% probability.

    Notice also that the procedure I followed in constructing the intervals had some discretion. In the column for type-B, I could have just as easily arranged for the outcomes whose intervals include B to be 0,1,2,3 instead of 1,2,3,4. That would have resulted in 75% coverage for type-B jars (12+19+24+20), still meeting the lower bound of 70%.

    My sister Bayesia thought this approach was crazy, though. "You have to consider the deliveryman as part of the system," she said. "Let's treat the identity of the jar as a random variable itself, and let's assume that the deliveryman chooses among them uniformly -- meaning he has all four on his truck, and when he gets to our house he picks one at random, each with uniform probability."

    "With that assumption, now let's look at the joint probabilities of the whole event -- the jar type and the number of chips you draw from your first cookie," she said, drawing the following table:
    Notice that the whole table is now a probability mass function -- meaning the whole table sums to 100%.

    "Ok," I said, "where are you headed with this?"

    "You've been looking at the conditional probability of the number of chips, given the jar," said Bayesia. "That's all wrong! What you really care about is the conditional probability of which jar it is, given the number of chips on the cookie! Your 70% interval should simply include the set of jars that, in total, have 70% probability of being the true jar. Isn't that a lot simpler and more intuitive?"

    "Sure, but how do we calculate that?" I asked.

    "Let's say we know that you got 3 chips. Then we can ignore all the other rows in the table, and simply treat that row as a probability mass function. We'll need to scale up the probabilities proportionately so each row sums to 100, though." 

    She did:
    "Notice how each row is now a p.m.f., and sums to 100%. We've flipped the conditional probability from what you started with -- now it's the probability of the man having dropped off a certain jar, given the number of chips on the first cookie."

    "Interesting," I said. "So now we just circle enough jars in each row to get up to 70% probability?" We did just that, making these credibility intervals:
    Each interval includes a set of jars that, a posteriori, sum to 70% probability of being the true jar.
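    Bayesia's "horizontal" construction can be sketched the same way. The conditional table is again a made-up stand-in (not the story's numbers), and the uniform prior encodes her assumption about the deliveryman: renormalize the observed row into a posterior over jars, then take the most probable jars until 70% of the posterior mass is included.

```python
# Hypothetical P(chips | jar) table -- illustrative numbers only.
jars = ["A", "B", "C", "D"]
cond = {
    0: {"A": 0.05, "B": 0.10, "C": 0.25, "D": 0.30},
    1: {"A": 0.05, "B": 0.15, "C": 0.25, "D": 0.40},
    2: {"A": 0.60, "B": 0.20, "C": 0.20, "D": 0.20},
    3: {"A": 0.20, "B": 0.25, "C": 0.20, "D": 0.08},
    4: {"A": 0.10, "B": 0.30, "C": 0.10, "D": 0.02},
}
prior = {jar: 0.25 for jar in jars}  # the deliveryman picks uniformly

def credibility_interval(chips, level=0.70):
    # Work "horizontally": renormalize one row into P(jar | chips), then
    # take the most probable jars until the credibility level is reached.
    joint = {jar: prior[jar] * cond[chips][jar] for jar in jars}
    total = sum(joint.values())
    posterior = {jar: v / total for jar, v in joint.items()}
    chosen, mass = set(), 0.0
    for jar in sorted(posterior, key=posterior.get, reverse=True):
        if mass >= level:
            break
        chosen.add(jar)
        mass += posterior[jar]
    return chosen, posterior
```

    One property is visible immediately: the credibility interval is never empty, because each row renormalizes to a proper distribution over the four jars -- exactly Bayesia's complaint about the frequentist's zero-chip case.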

    "Well, hang on," I said. "I'm not convinced. Let's put the two kinds of intervals side-by-side and compare them for coverage and, assuming that the deliveryman picks each kind of jar with equal probability, credibility."

    Here they are:

    Confidence intervals:
    Credibility intervals:
    "See how crazy your confidence intervals are?" said Bayesia. "You don't even have a sensible answer when you draw a cookie with zero chips! You just say it's the empty interval. But that's obviously wrong -- it has to be one of the four types of jars. How can you live with yourself, stating an interval at the end of the day when you know the interval is wrong? And ditto when you pull a cookie with 3 chips -- your interval is only correct 41% of the time. Calling this a '70%' confidence interval is bullshit."

    "Well, hey," I replied. "It's correct 70% of the time, no matter which jar the deliveryman dropped off. That's a lot more than you can say about your credibility intervals. What if the jar is type B? Then your interval will be wrong 80% of the time, and only correct 20% of the time!"

    "This seems like a big problem," I continued, "because your mistakes will be correlated with the type of jar. If you send out 100 'Bayesian' robots to assess what type of jar you have, each robot sampling one cookie, you're telling me that on type-B days, you will expect 80 of the robots to get the wrong answer, each having >73% belief in its incorrect conclusion! That's troublesome, especially if you want most of the robots to agree on the right answer."

    "PLUS we had to make this assumption that the deliveryman behaves uniformly and selects each type of jar at random," I said. "Where did that come from? What if it's wrong? You haven't talked to him; you haven't interviewed him. Yet all your statements of a posteriori probability rest on this statement about his behavior. I didn't have to make any such assumptions, and my interval meets its criterion even in the worst case."

    "It's true that my credibility interval does perform poorly on type-B jars," Bayesia said. "But so what? Type-B jars happen only 25% of the time. It's balanced out by my good coverage of type-A, C, and D jars. And I never publish nonsense."

    "It's true that my confidence interval does perform poorly when I've drawn a cookie with zero chips," I said. "But so what? Chipless cookies happen, at most, 27% of the time in the worst case (a type-D jar). I can afford to give nonsense for this outcome because NO jar will result in a wrong answer more than 30% of the time."

    "The column sums matter," I said.

    "The row sums matter," Bayesia said.

    "I can see we're at an impasse," I said. "We're both correct in the mathematical statements we're making, but we disagree about the appropriate way to quantify uncertainty."

    "That's true," said my sister. "Want a cookie?"

    Q: Did MIT's decline (outside of biology and life sciences) begin shortly before WW2?

    The democratization of electronics -- made possible by transistors and integrated circuits, among other things -- surely contributed to a decrease in relative prominence for MIT, but the institution and the people in its close orbit have had many groundbreaking accomplishments in electrical engineering and computer science since World War II.

    Looking at EECS only, consider MIT's dominant postwar role in:
    • Magnetic core memory
    • Navigating to land on the moon (http://www.technologyreview.com/...)
    • Chaos theory and the "butterfly effect" (which earned Edward Lorenz the Kyoto Prize in 1991)
    • Time-sharing and operating systems (Corbato won the Turing Award in 1990)
    • Artificial intelligence and neural networks (e.g., Minsky's groundbreaking work)
    • Object-oriented programming, information hiding and abstraction (considering, e.g., Liskov's 2008 Turing award and 2004 von Neumann medal)
    • RSA public-key cryptography
    • The GNU Project
    • The X Window System
    • The packet-switched Internet (consider, e.g., Bob Kahn's Turing award in 2004)
    • LOGO
    • E-Ink
    • The spreadsheet (Bricklin and Frankston's VisiCalc)
    • High-definition digital television (including the work of Lim and Schreiber, and MIT's role as one of four voting seats on the Grand Alliance)
    • Languages and automata (e.g., Chomsky's work)
    • Information theory and coding, including Shannon's revolutionary master's thesis in the 30s and his work as an MIT professor from the 50s on
    • The rise of "hacker culture" (see Steven Levy's "Hackers") and the digital video game ("Spacewar!", much later "Rock Band" and "Guitar Hero")
    • Programming languages, including McCarthy's LISP (still used more than 50 years later)

    Tuesday, June 8, 2010

    Q: What are all the terms used when negotiating with reporters about sourcing and attribution?

    Forget the jargon -- the important thing is to reach a meeting of the minds with the reporter about the conditions imposed on the interview or information, BEFORE you give the interview or share the information. The reporter needs a chance to agree or disagree with the proposed terms and you both need to have the same understanding so there are no surprises later. A surprise will not be in your favor so it's in your interest to get clarity.

    The general rule in American print journalism is that a reporter always has to identify themselves as a reporter, and from that point on, you are "on the record" unless you reach an agreement otherwise. That means everything can be attributed to you by name.

    Regarding the terms -- generally speaking, information delivered "on background" might be directly quoted (meaning a verbatim quotation inside quotation marks) and attributed to "a source close to FooCorp" or "a FooCorp executive." Information delivered "on deep background" might be paraphrased and attributed to "people familiar with the matter" (this is a favorite Wall Street Journal phrase) or "according to various estimates" or simply inserted in the article without attribution.

    Information that's "off the record" isn't supposed to appear in the article at all unless given by another source -- but without a firm understanding otherwise, the reporter may use the "off the record" information to try to pump other sources to confirm it. This could easily make it known to the other sources that you were the original source of the information.

    Every reporter and publication has a slightly different understanding of what is and isn't permissible with "background," "deep background," "not for attribution" and "off the record" material, so again, better to get a real meeting of the minds than to use the jargon and be surprised later.