Population Statistic: Read. React. Repeat.
Saturday, May 24, 2008

Nothing like living in a patch of disappearing history. Manhattan’s very own Lower East Side has been named to the National Trust for Historic Preservation’s list of America’s 11 Most Endangered Historic Places.

I don’t know about the other 10 hotspots (a shoutout there to the Great Falls Portage, presumably close by to local blogger Dave), but for the LES, it’s already old news:

Christopher Mele, author of “Selling the Lower East Side: Culture, Real Estate, and Resistance in New York City,” said in a phone interview: “It’s a formal recognition of a trend that’s been in place since the ’80s. The local culture has been under prey from the forces of real estate development.”

Mr. Mele’s book argued that the Lower East Side had in fact “been a target of change since the cutoff of immigration in the 1920s.” The pressures on the neighborhood continued through the 1960s, when the loosening of immigration laws prompted a new wave of immigration to the neighborhood and the city, and the term “East Village” was popularized “as a counterpoint to the West Village — a bohemian enclave in the ’40s and ’50s that became a bit too pricey for artists, who then moved east.” And the process really gained steam in the 1980s, Mr. Mele argued, when the name “Alphabet City” became popular.

As for why this corner of the LES is called Alphabet City, notice the names of the streets here: Avenue A, Avenue B, Avenue C, and last but not least, Avenue D. Get it? Hey, a living museum piece should be clearly tagged, shouldn’t it?

by Costa Tsiokos, Sat 05/24/2008 07:43:47 PM
Category: History, New Yorkin'
| Permalink | Trackback | Feedback


Yes, John Dvorak regularly produces a muddled mess of a column for MarketWatch, and this week’s edition is no exception. He jumps haphazardly from some fabricated speculation about AltaVista re-emerging as a search player (maybe he’s not aware that Yahoo! swallowed it up ages ago, and is now steadily erasing the AltaVista brand?) to how much Vista sucks, and back to search via how Google might someday buy Microsoft.

But read down far enough, and you’ll uncover a good nugget:

What is really needed are new and better search engines. To be honest about it, Google, Yahoo and Microsoft all stink.

We all know this is true. Sure, you can find the major and obvious sites with any of them. But seriously try and find, for example, the best knitting site.

Go ahead: Type in the keywords “best knitting site” into Google and tell me which site, out of the 300,000-plus results Google returns, is really the best knitting site. It cannot be done, despite the fact that there must be a best one. A group of knitters might know, or maybe not.

It’s getting more difficult to find anything with a narrow target using any of these search engines. Recently, I was searching for a Barack Obama citation for an article and could not find it on Google; there were too many results to be useful.

While the Google mechanism works great for selling millions of little ads, it’s old-fashioned and already dead, as are the rest of these search engines, which basically are all based on decade-old Web-crawling technologies combined with massive caching.

To do its job, Google has to maintain up-to-date and redundant copies of the entire Internet on its servers. It’s a ridiculous idea.

Even this quickie analysis contains a major flaw: Google doesn’t store “the entire Internet on its servers”. Last stat I saw was that Google’s entire cache could hold only about a third of the Web at any one time — still a very impressive mass of information, but nowhere near everything. (Not to mention that it doesn’t do much to plumb the depths of the Deep Web.)

But other than that, he’s right. Search technology isn’t particularly robust, and that’s because it runs on a primitive concept: Text recognition. Search engines can’t do squat unless a page/site is loaded up with keywords, and accurate keywords at that. That still does the job most of the time, because the Web is still overwhelmingly a written form of media. Even sites that specialize in non-written content are forced to include tags and other text-identifiers in order to be searchable.
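
To make that keyword dependence concrete, here’s a minimal sketch of the idea in Python (purely illustrative — the URLs and page text are made up, and this isn’t anyone’s actual engine): build an inverted index from words to pages, then answer a query by intersecting the word lists. A page that never spells out the right words simply never surfaces.

    # Toy inverted-index search: captures the keyword dependence described
    # above, nothing more.
    from collections import defaultdict

    pages = {
        "knitworld.example": "free knitting patterns and a friendly knitting community",
        "catphotos.example": "photos of cats, with hardly any descriptive text",
    }

    # Map each word to the set of pages containing it.
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word.strip(",.")].add(url)

    def search(query):
        """Return pages that contain every query word (naive AND search)."""
        words = query.lower().split()
        results = set(index.get(words[0], set()))
        for word in words[1:]:
            results &= index.get(word, set())
        return results

    print(search("knitting patterns"))   # finds knitworld.example
    print(search("best knitting site"))  # empty -- nobody typed those exact words

That’s essentially Dvorak’s knitting complaint in a dozen lines: “relevance” boils down to whether the right words happen to appear on the page.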

The problem comes with SEO techniques, legitimate and not-so-much, that attempt to game the system. The focus on words as flags for what a webpage is “about” makes that system relatively easy to manipulate, and all the algorithms that Google et al devise ultimately can’t get past this fundamental shortcoming.
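
And gaming that setup is just as simple in miniature. Here’s a hypothetical sketch (again, not any real engine’s ranking): if relevance is scored by counting keyword hits, a stuffed spam page beats a genuinely useful one.

    # Naive term-frequency ranking, and how keyword stuffing games it.
    def score(page_text, query):
        """Count how often each query term appears on the page."""
        words = page_text.lower().split()
        return sum(words.count(term) for term in query.lower().split())

    honest_page = "A thoughtful comparison of knitting needles, yarns and patterns."
    stuffed_page = "knitting " * 50 + "buy cheap pills here"

    print(score(honest_page, "knitting"))   # 1
    print(score(stuffed_page, "knitting"))  # 50 -- the spam page 'wins'

Smarter ranking signals paper over this, but as long as words are the raw material, the arms race never really ends.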

So what’s the solution? It’d be a neat trick to devise an automated system that can recognize more substantive content than just text. Efforts at digital photo recognition and the like are good moves, but to me they seem to lead to the same dead end: focusing on one element of what’s on the Web, rather than a way to recognize qualitative Web content. And really, maybe the very structure of the Web prevents the development of better Web search. For now, we’re stuck with what we’re stuck with.

by Costa Tsiokos, Sat 05/24/2008 06:46:38 PM
Category: Internet
| Permalink | Trackback | Feedback