Beware of HELOC & 2nd Mortgage Traps on Refinancing

I found this article today on Money Musings about the pitfalls of trying to refinance your mortgage when you have a 2nd mortgage or HELOC on the house:

A significant number of my personal acquaintances purchased homes (newer, larger) within the last several years. Inevitably, they were also convinced that financing via an 80/20 first/second mortgage setup was the way to go. Doing so is “financially smart,” because it allows them to avoid paying private mortgage insurance.

It’s an idea that works … until it doesn’t. Consider this Baltimore resident’s story, for instance:

Baltimore Sun: “Some Lenders Block Refi Ability”

He needs to refi out of his nasty ARM first mortgage — he’s lucky, in that he does have decent equity in his home — but his second-mortgage holder won’t agree to a re-subordination.

Under any circumstances.

I think the 80/10/10 is more common here in the Bay Area, or at least it was back in 2003/2004.  The 80/10/10 is an 80% first mortgage, a 10% HELOC, and a 10% down payment.  No mortgage insurance, and you get a HELOC, which can be useful if you need to tap assets for some reason.
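For anyone who hasn't seen the structure before, the arithmetic is simple. Here's a quick sketch on a hypothetical purchase price (the $500,000 figure is made up purely for illustration):

```python
# 80/10/10 structure on a hypothetical purchase price (illustrative numbers only)
price = 500_000

first_mortgage = 0.80 * price   # 80% first mortgage
heloc          = 0.10 * price   # 10% HELOC, the "second"
down_payment   = 0.10 * price   # 10% cash down

print(round(first_mortgage), round(heloc), round(down_payment))   # 400000 50000 50000

# Because the first mortgage stays at 80% loan-to-value, the borrower typically
# avoids private mortgage insurance, which lenders generally require above 80% LTV.
```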

The Baltimore story is a pretty good example of how quickly liquidity can jam up in a market like mortgages, which isn't centrally brokered.

I’ve also seen stories lately of banks literally calling due their HELOC loans with fairly short notice.  Seems to be tied to people who are underwater on their houses (debt is greater than value of house). Not a good thing if you don’t have the liquidity to cover the outstanding balance, or if you were depending on your HELOC as an emergency fund.

Another lesson on why, in the end, liquidity can be one of the most important aspects of personal finance.  People tend to focus on rates of return, which, of course, is a good thing to focus on.  But when you need money, it's amazing how rates of return give way to the simple ability to tap assets for cash.

Why Everyone In My Family Has Blue Eyes, Except Me

Today, I discovered "The Spittoon," the blog from 23andMe, the company dedicated to personal genomics.  Really interesting material.  I found this article particularly eye-catching:

SNPwatch: One SNP Makes Your Brown Eyes Blue

I’m curious about this, of course, because while I have green eyes, my wife Carolyn & my two sons have blue eyes.  It seems that this isn’t even due to a single gene – it’s literally a single nucleotide pair.  From the article:

Three recently published papers (here, here, and here) report that a single SNP determines whether a person’s eyes will be blue; every blue-eyed person in the world has the same version. The findings also suggest that the blue-eyed version of the SNP can be traced back to a single ancestor that lived about 6,000 to 10,000 years ago.

It’s been known for a while that eye colors like green and hazel (deviations from the brown color found in the majority of people) can be explained by SNPs in a gene called OCA2. The protein made by this gene is involved in the production of melanin, a pigment found in the cells of the iris. This is the same pigment that gives your hair and skin their color. Darker eyes have more melanin than lighter colored eyes.

But none of the known variations in OCA2 could explain blue eyes. The new research seems to have solved the mystery. A SNP near OCA2, but not in it, determines whether a person will have blue eyes.

The SNP, rs12913832, is actually in a gene called HERC2. Scientists think that instead of affecting HERC2, the SNP controls how much protein will be made from the nearby OCA2 gene. Low levels of OCA2 protein, caused by the G version of the SNP, lead to lower levels of melanin, which in turn leads to blue eyes. 23andMe customers can check their genotype at this SNP in the Genome Explorer or in the Gene Journal (Note: In the Gene Journal you’ll see other SNPs also associated with eye color. The combination of these SNPs with the blue-eyed version of rs12913832 can end up giving a person green eyes instead of blue).

What a great blog.  Sign me up for that feed.

As a side note, Michael Arrington has posted his account info from 23andMe on TechCrunch, so you can live vicariously through him in case you are short $1000.  I have to admit, seeing those results makes me jealous – I’d love that kind of genetic detail on myself & my family members.

How to Delete Individual Backups from Apple Time Machine

Some of my most popular blog posts, over time, have been tips & tricks I’ve posted about how to get certain things done on the Mac.  My rule of thumb for these posts is simple – if I get stumped about how to get something set up, and then after an hour of searching I find the answer, I share it here.  My hope is that I’ll save other people that hour of searching.

This post is about a question I had today:

How do you delete individual backups from Time Machine?

The problem I had was that the 1TB drive I have for Time Machine backups was full.  Now, Time Machine is very good about deleting the oldest backups on an ongoing basis to manage space.  But what if you just “need” extra space on that drive?  In my case, I needed to free up about 200GB so I could copy over some files, temporarily, from a drive I was retiring.

Time Machine has a unique UI: no menu bar, so there's no obvious place to click "delete a backup".  I looked everywhere.  I clicked through to individual backups, but couldn't see any button that said "remove" or "delete".

Then I found this Mac OS X Hint from the always helpful macosxhints.com.

Turns out, when Time Machine presents you with the "Finder-like" interface to your drive, it subtly changes the menu items of the "gears" menu on the window.  I say subtly because, of course, there is no visual indication that the "gears" menu has different menu items in this context.

One of those menu items is “Delete Backup”.

So, to delete a full backup, you just do the following:

  1. Navigate to the date you want to delete.  In my case, I wanted to delete my oldest backup, from 1/30/2008.
  2. Navigate in the Finder window to your overall machine.  In my case, it’s called “Powersmash G5”, where I have 2 internal drives that are backed up.
  3. Click the "Gear" menu, and choose "Delete Backup".
  4. Enter the admin password for the Finder, if it asks.

My guess is that Apple wasn't trying to make this hard – they are just suffering from a non-standard interface, and an overloading of that "gears" menu, which I'm sure is theoretically supposed to be a "contextual menu".  A menu that showed up on right-click of either the Finder window itself or the Time Machine backup marker on the right would have been more obvious to me.

Hope this tip is useful to someone.  It sure helped me today.

Vanguard Cuts ETF Fees… Again

Vanguard announced this week yet another reduction in ETF fees on some of their major funds:

Earlier this month, Vanguard shaved its fees on four of its popular ETFs. Those were:

  • Growth ETF (AMEX: VUG), from 0.11% to 0.10%.
  • Value ETF (AMEX: VTV), from 0.11% to 0.10%.
  • Small-Cap Growth ETF (AMEX: VBK), from 0.12% to 0.11%.
  • Small-Cap Value ETF (AMEX: VBR), from 0.12% to 0.11%.

Also, the new Europe Pacific ETF (AMEX: VEA) wound up the year at 0.12%. The fund opened last July and was expected to assess expenses of around 0.15%.

“We originally estimated an annualized expense ratio at higher levels,” said Rebecca Cohen, a Vanguard spokesperson. “But after the year closed out, expenses wound up being less than originally estimated.”

While relatively tiny moves, the latest changes further distance Vanguard's ETF lineup from the pack.  They also bring to 18 the number of different ETFs on which Vanguard has cut expense ratios within the past four months.

The flurry of cost-cutting leaves Vanguard with an average expense ratio of 0.16%.  Through year-end 2007, Lipper data showed the average ETF in the U.S. with an expense ratio of 0.53%.

“As ETFs grow in size, they generally become more efficient to run,” said Vanguard in a statement.

As a shareholder-owned company, Vanguard says its “policy has always been to pass the savings from those efficiencies through to investors. The new expense ratios reflect the lower costs of managing these products.”

This is why I am such a loyal customer of Vanguard and Vanguard financial products.  Their entire brand promise is around minimizing management costs for investors, and as a result, they proactively reduce fees constantly.  Unlike other institutions that use low fees as a short-term "loss leader" to bring in assets, Vanguard genuinely strives for the lowest cost structure, and passes those savings on to its investors.

The idea that you can now buy an index of small-cap, domestic, growth companies for 11 basis points a year is just amazing.  11 basis points!  That means if you had $10,000 invested, the annual overhead cost would be just $11.  And that's for a fairly focused index – I believe the broad-based US domestic stock index ETF from Vanguard is down to just 7 basis points!
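If you want to see what that fee gap means in dollars over time, here's a rough sketch.  The 8% return is just an illustrative assumption, and netting the expense ratio out of the return is a simplification of how fees are actually charged:

```python
# Annual cost of an 11 bps vs. a 53 bps expense ratio on a $10,000 balance,
# plus a rough look at what the gap compounds to (illustrative assumptions only).
balance = 10_000
vanguard_er = 0.0011      # 11 basis points
average_er  = 0.0053      # 53 basis points (the average ETF figure cited above)

print(round(balance * vanguard_er, 2), round(balance * average_er, 2))   # 11.0 vs 53.0 dollars per year

# Compound a hypothetical 8% gross return for 30 years, netting out each fee:
years, gross = 30, 0.08
low_fee  = balance * (1 + gross - vanguard_er) ** years
high_fee = balance * (1 + gross - average_er) ** years
print(round(low_fee), round(high_fee))   # roughly 97,600 vs 86,800
```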

When at all possible, I tend to go with the Vanguard index ETF/fund.  In fact, since many brokerages (like Fidelity) charge exorbitant commissions on Vanguard mutual funds, you can just buy the ETFs like any other stock instead.  Pay a cheap commission once, and pay cheap expenses for decades.

Hard to beat a great product with a great cost from a great firm.  Hard to beat.

To Know Elliot is to Love Elliot

We just got through an amazing launch last night at LinkedIn.  New homepage, new site-wide navigation, new Status feature, and countless other small enhancements.

This is a very funny blooper reel that Elliot & Mario recorded while trying to film the video to launch the new homepage.  It was so funny, they played it for the entire company at lunch yesterday.  He has become a cult phenomenon.

I can’t tell you how proud I am to have Elliot on our team.  Enjoy.

Inflation, Hedonics, and How Silicon Valley May Have Wrecked Our Monetary Policy

I read a really interesting book on my trip to Boston last week.  It's called Greenspan's Bubbles: The Age of Ignorance at the Federal Reserve, by William Fleckenstein.  I've read Bill Fleckenstein's columns on-and-off since 1999, when I found him through Herb Greenberg.  He's definitely an intelligent guy, and while he comes across as a perma-bear, the reality is that he's really just a very strong, traditional, bottom-up, fundamentals-based valuation guy.

He has a real axe to grind in this book, but I’m going to do a book review in a separate post.  However, one of the topics he raised was so interesting to me, I had to write a post about it.

Summary: I think we seriously messed up our monetary policy in the 1990s.

To be more specific, I think that in the 1990s, we made a fundamental change to the way we track inflation statistics for the United States that on the surface seems logical.  But unfortunately, the realities of the economics of computers are so extreme that they may have completely distorted the inflation numbers for the entire country.  And if you distort the inflation numbers for the entire country, you run the risk of distorting the monetary policy of this country.  In fact, if you seriously mess up inflation calculations, you also affect fiscal policy, social benefit policy, and even global economic stability.

Yeah, it could be that big.

OK, here's the information from the book that got me thinking.  It starts on page 39, in the chapter called "The Bubble King".  Fleckenstein explains three changes that were made in 1995 to the way the US calculates the Consumer Price Index (CPI):

  • Change 1: Move from Arithmetic to Geometric Rates.  OK, this one is perfectly legitimate.  After all, inflation rates compound year to year, so calculating the rate as a geometric progression is fundamentally correct.  I was actually shocked to find out we didn't do this before, frankly.  True, at low percentages, arithmetic and geometric calculations don't always vary a lot, but they do vary, and geometric is absolutely the right way to calculate the number.

    For those of you asking what the difference is, let's use an example.  Say that over 5 years, the price of milk goes up 50%.  An arithmetic calculation would say 50%/5 = 10% per year.  The problem, of course, is that if you actually raise the price by 10% per year, you get a lot more than 50%, because the price increases compound each year.  In Year 1, you'd go from $1.00 to $1.10, and in Year 2, you'd go to $1.21.  By Year 5, you'd be at $1.61, not $1.50.  It's just like compounding interest in your savings account.  Geometric calculations take this into account: instead of 10%, they would say the inflation rate was 8.45%, which over 5 years compounds to 50%.  (There's a quick sketch of this calculation after this list.)

    Doing this lowers the number reported, but it’s fundamentally the correct number to report on an annual basis.  So far, so good.

  • Change 2: Asset Substitution.  This one is a little murkier.  Basically, the way that economists calculate inflation for consumer goods is that they take a representative sample of products – hundreds of them.  They then track the prices for these products each year.  If you’ve ever seen those funny articles that track the “price index of the 12 days of Christmas” every year, you get the idea.  🙂

    Asset substitution covers the case where similar goods might be substituted by people if one rises in price more than the other.  Inflation is lower for the person, because instead of buying the high priced item, they buy the lower priced item.  For example, let’s say the basket of goods included a 12-ounce can of soda.  If the price of soda skyrocketed for some reason, most people would not actually spend the money, but would drink less soda and more water.  The extent to which that substitution happens means that the inflation rate is actually lower for people, because they don’t feel the full impact of the rise in price of soda.

    Fleckenstein argues that this change was "truly absurd."  Like a lot of the analysis in the book, that's a significant exaggeration.  The truth is, the fundamental logic behind substitution is sound.  But like any of these economic techniques, if abused, it could lead to incredibly large errors in the calculation of inflation.

  • Change 3:  Hedonic Adjustments.  OK, this is the one that has me worried.  The CBO describes these as “quality adjustments”.  Once again, the logic behind them is sound.  It’s the execution that’s troubling.  Hedonic adjustments account for the fact that if you improve the quality and features of one of the items in the basket of goods, the price might rise due to that increase in feature set, not inflation.  For example, if in 2001 a Honda Civic has 145 horsepower, and in 2004 a Honda Civic has 160 horsepower, then the 2004 Honda Civic actually has 10% more horsepower than the 2001 version.  To the extent that people pay for horsepower, the inflation numbers are adjusted to reflect that part of the price increase in the Honda Civic is due to increase in function, not just inflation.

    Like asset substitution, this could easily be abused, since it involves a judgment call – how much has the product improved vs. how much has the price just risen due to inflation?  It's a hard line to draw, especially since in 2004 there are no new 145-horsepower Honda Civics around for an apples-to-apples comparison.  (The sketch after this list includes a crude version of this kind of adjustment.)
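To make the arithmetic concrete, here's a quick sketch of the milk example from Change 1, plus a crude version of the hedonic adjustment from Change 3.  The car prices are made up for illustration; this is the back-of-the-envelope version, not the actual government methodology.

```python
# Change 1: arithmetic vs. geometric annualization of a 50% price rise over 5 years
total_increase = 0.50
years = 5

arithmetic_rate = total_increase / years                  # 0.10   -> 10% per year
geometric_rate = (1 + total_increase) ** (1 / years) - 1  # ~0.0845 -> 8.45% per year

print(round((1 + arithmetic_rate) ** years - 1, 4))  # 0.6105 -> compounding 10% overshoots to 61%
print(round((1 + geometric_rate) ** years - 1, 4))   # 0.5    -> the geometric rate lands exactly on 50%

# Change 3: a crude hedonic adjustment, in the spirit of the Honda Civic example.
# Strip out the share of the price change attributable to the quality improvement.
old_price, new_price = 16_000, 17_600   # hypothetical sticker prices
quality_gain = 160 / 145 - 1            # ~10.3% more horsepower

measured_inflation = new_price / (old_price * (1 + quality_gain)) - 1
print(round(measured_inflation, 4))     # ~ -0.0031 -> a 10% sticker increase reads as roughly flat
```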

So, now that you’ve gotten your fill of Macroeconomics for the day, here’s the part where we may have wrecked our monetary policy.

Moore’s Law

Well, it's not just Moore's Law.  It's the pace of product improvement in the high tech industry, specifically hardware.  It's huge.  It's unbelievable.  There has never been a manufactured good like the computer, which doubles in capability roughly every 18 months.  Hard drives keep doubling in size, too.  I bought a 40MB external hard drive in 1993 for $200.  I just bought a 1TB drive for the same price last month.  That's roughly a 25,000-fold increase in storage for the same price in 15 years.

Try feeding that through “Hedonic Adjustment” and see what you get.  A huge deflationary element.

Now, that wouldn't matter, except for one thing: computers have become a decently large chunk of the US economy.  Not huge, mind you.  The US economy is now over $13 trillion, and computers are lucky to make up 2-3% of that.  But 2-3% is actually a big number when you start feeding ridiculous improvements in "quality/features per dollar" through it.
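Here's a rough sense of how extreme that deflationary element is, using the hard-drive numbers above.  The 2% weight is a made-up illustration, not the actual CPI weight for computers, and the whole thing is deliberately back-of-the-envelope:

```python
# Implied quality-adjusted price decline for storage: same ~$200, 40MB -> 1TB over 15 years.
mb_1993, mb_2008 = 40, 1_000_000   # decimal megabytes; 1TB = 1,000,000 MB
years = 15

capacity_ratio = mb_2008 / mb_1993                        # 25,000x more storage per dollar
annual_decline = 1 - (1 / capacity_ratio) ** (1 / years)  # ~49% cheaper per MB every year
print(round(capacity_ratio), round(annual_decline, 3))    # 25000  0.491

# If a category deflating ~49% a year carries even a 2% weight in the index,
# it shaves very roughly a full percentage point off measured inflation.
weight = 0.02
print(round(weight * annual_decline, 4))   # ~0.0098 -> about a -1% contribution
```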

Let me jump to page 101 of the book, in the chapter called “The Stock Bubble Bursts”:

James Grant, editor of the always insightful Grant’s Interest Rate Observer, was one skeptic who took the trouble to dissect the complicated subject that Greenspan seemed to accept at face value.  In the spring of 2000, Grant published a study by Robert J. Gordon, a Northwestern University economics professor, who had prepared for the Congressional Budget Office a paper with a shocking revelation:

There has been no productivity growth acceleration in the 99% of the economy located outside the sector which manufactures computer hardware… Indeed, far from exhibiting a productivity acceleration, the productivity slowdown in manufacturing has gotten worse: when computers are stripped out of the durable manufacturing sector, there has been a further productivity slowdown in durable manufacturing in 1995-99 as compared to 1972-95, and no acceleration at all in nondurable manufacturing.

Grant backed that thunderbolt up with another study conducted by two economists, James Medoff and Andrew Harless.  Their contention was that the use of a hedonic price index grossly misrepresented the actual data.

This is bad news.  Bad bad news.

In case you are wondering, the fundamental question that our Federal Reserve and other governmental agencies concerned with the US economy ask themselves is how much of the growth in the economy is due to three factors:

  • Population growth
  • Productivity growth
  • Inflation

If our calculation of inflation is off, it drastically changes our calculation of productivity.  Productivity is the measure of how much economic value is generated per unit of work time.  The 1990s were largely heralded as a decade of re-invigorated productivity growth.  It's why some people think Robert Rubin (or Bill Clinton) was great.  It's why people believed in a new economy driven by technological progress.

The data above is disturbing.  Yes, it suggests that high tech has had a phenomenal impact on our aggregate numbers.  But those aggregate numbers are totally misleading if it turns out that the other 99% of the economy was not, in fact, seeing any acceleration in productivity growth.  Worse, it's possible computers were actually masking continued weakness in every other area.

Look, I’m fairly sure that the people responsible for collecting this data are intelligent, and that this issue has likely been raised already.  It’s also possible that this book and its citations are already known and discredited.

Still, I’m left with the following thoughts:

  1. Is the above data true?  If so, does this mean the 1990s were not, in fact, a real productivity boom for the economy overall?
  2. If these issues are true and known, are the Federal Reserve, Treasury, Congress, et al. taking this into account when they make monetary and fiscal policy decisions?  If inflation is understated, then interest rate cuts, fiscal stimulus, and a whole host of other policy decisions could be disastrous.  We could end up with HUGE inflation in everything except computers to make the numbers balance.  (I feel like this is like that line from "The Matrix Reloaded" – the system is desperately trying to balance the equation.)
  3. When they make hedonic adjustments for computers, do they take actual utility into account?  Sure, today's Windows PC is 3x faster than one from five years ago, but the latest versions of Windows & Office are much more resource-intensive than five years ago, too.  My Mac Plus booted faster than my PowerMac G5.  How do they measure the hedonic adjustment for computers?  Are they grotesquely overestimating the increased value from hardware improvements, without discounting the resource requirements of software needed to provide equivalent "utility"?

Feel free to comment if you have pointers to information either confirming or refuting the above issues.  This hits home for me as an issue that ties together two of my strongest personal interests – computers & economics.

Also, feel free to post this blog URL to other boards or forums where experts might be able to answer some of the above questions.

Karl Wiley Joins Motif as President of US Operations

Caught this on my Google News Alert today from PRLog:

Motif, Inc., a leading global knowledge-based BPO services provider announced today that Karl Wiley has joined the company as President of U.S. operations. Mr. Wiley will be responsible for all of Motif’s U.S. based operations, including corporate strategy, sales & marketing, key account management and M&A. He will be focused on driving accelerated growth for Motif by attracting new clients, expanding into additional industries and service lines, and growing activity from Motif’s current client base.

Mr. Wiley joins Motif after more than six years as an executive with eBay. Most recently he served as the Chief Operating Officer of MicroPlace, eBay’s start-up initiative providing a retail investment marketplace in the Microfinance industry. Prior to that, he was the general manager of eBay’s $5+ billion Technology and Media categories, and led eBay’s B2B wholesale initiative. In these roles, Mr. Wiley was responsible for strategy, consumer marketing, product management and customer service, and managed eBay’s relationship with many major branded retailers and manufacturers.

Karl was one of the great eBay Category Managers.  I first worked with Karl when he was part of the Business & Industrial team, which turned out to be an incredible pool of leadership talent.  At the time, Karl was the primary driver & business sponsor for product support for wholesale lots at eBay.  For me, it was one of the first projects where I felt like I was truly working on features that were driven by the eBay selling community itself, and not just from internal motivation.  I learned a lot from my efforts with the B&I team, and even after the category management for wholesale lots was disbanded, I still ended up leading the course on Buying & Selling in Lots at eBay Live in 2004 & 2005.  Packed rooms, both times.

Congratulations, Karl, and best of luck with your new venture.

One minor quibble, of course, is that it's time to update your LinkedIn profile.

Apple: Feature Requests for AppleTV and/or FrontRow

I’m not sure if anyone from Apple is reading this post, but hope springs eternal.

Listen, I love my AppleTV.  Every week I convert more and more of my movies to MP4 and add them to iTunes.  And I love the fact that FrontRow 2.0 in Mac OS X 10.5 (Leopard) is basically the AppleTV software.  Beautiful.  My Mac Mini, with a 500GB USB 2.0 drive, is an AppleTV on steroids.  Perfect.

I now have over 200 hours of movies and TV ripped for my two Apple boxes.

But there is a problem.  Two, really.  I need to request two key feature additions for the next dot-release of AppleTV and FrontRow:

Request 1: Please add video playlists

This should be obvious, because you already have them on the iPod, but I really need these on the AppleTV.  I have a lot of ripped TV shows and cartoons that are only 5-20 minutes in length.  What I want to do is create an arbitrary video playlist, and have the AppleTV play some number of shows in a row continuously.  Right now, the device has no playlists.  So either I have to rip a custom MP4 of different combinations, or I have to manually play each show individually.

Example: School House Rock

I ripped this DVD.  37 Episodes, each 3 minutes.  Of course, I could not possibly rip the individual shows given the current AppleTV interface.  Instead, I ripped a full two-hour block of all 37 episodes back-to-back.  I then ripped smaller, 8 or 9 episode sequences based on topic.

What I should have been able to do is:

  1. Rip each 3 minute short as a separate file
  2. Create playlists of different groups and sequences of those segments

A lot of great video content is short, and makes sense to view in playlists.  Ripping different combinations into a single video file is wasteful, and clutters the interface.

Request 2: Folders

This one is easy because Tivo already figured this out with version 4.0 of their interface (they are now on 9.x I think).

Let me create folders to group together content so that I don’t have a linear list that goes on forever.  For example, if I have all 6 Star Wars movies, let me create a folder for them.  If I have 10 hours of Band of Brothers, let me group it together.

Right now, the only "grouping" functionality is through the TV Shows interface.  Frankly, that's pretty clunky.  I'm not even sure I like breaking video apart into Movies and TV Shows; I certainly don't have that breakdown on my Tivo.  I'd rather just see TV Shows as a folder of video, sorted by season, then episode, kind of like music that is broken down by artist, and then album.

True, I wouldn't mind dynamic grouping based on tagged elements of the movies, but that's actually overkill for now.  I'd settle for good, old-fashioned, manual folders.  A simple directory structure could help me scale the current interface and remote to handle hundreds of movies, instead of dozens.

So, if you are listening, Apple, help me out here.  Thanks.

The Limits of Quantum Computers by Scott Aaronson

I had a business trip to Boston this past week, which means I got a lot of good reading hours in on the plane ride across the country.  As a result, expect to see some intellectually inspired posts this week.

Tonight, I’m going to start off with an easy one – the most recent issue of Scientific American.  It is a great issue.

Actually, three of the articles were blog-worthy.  Tonight, I'm going to highlight the great piece by Scott Aaronson called "The Limits of Quantum Computers".

Here is a synopsis, from the top of the article:

  • Quantum computers would exploit the strange rules of quantum mechanics to process information in ways that are impossible on a standard computer.
  • They would solve certain specific problems, such as factoring integers, dramatically faster than we know how to solve them with today’s computers, but analysis suggests that for most problems quantum computers would surpass conventional ones only slightly.
  • Exotic alterations to the known laws of physics would allow construction of computers that could solve large classes of hard problems efficiently. But those alterations seem implausible. In the real world, perhaps the impossibility of efficiently solving these problems should be taken as a basic physical principle.

Nah, I don’t think that does it justice.

I've been following Quantum Computing off-and-on since the mid-1990s.  I took my first Automata & Complexity course at Stanford (CS 154, from Rajeev Motwani) back in 1995.  It was one of the truly mind-opening courses in the Computer Science undergrad curriculum.  Recognizing that there are mathematical frameworks not just to solve problems, but to describe their complexity, is fascinating.

Quantum Computing is fascinating because it takes advantage of the truly strange physics of entanglement, a state in Quantum Mechanics where particles can share a matching, but unknown, fate.  A separate branch of algorithmic mathematics has sprung up around analyzing what types of problems, if any, would be simpler to solve on a computer that leveraged quantum bits, or "qubits" for short.  At the same time, molecular scientists have struggled to make progress building even very small quantum computers.

To date, there are a small number of problems that quantum computers have been proven to be able to solve significantly more efficiently than traditional computers can.  Interestingly, the most famous of these is factoring, which happens to be the problem we base most of our security algorithms on.  It turns out that factoring a very large number into two primes is very difficult for normal computers, but efficient for quantum computers.
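If you want a feel for the classical side of that claim, here's a toy sketch.  Trial division has to test on the order of the square root of N candidates, so the work grows exponentially in the number of digits; the primes below are tiny, well-known examples, nothing like the hundreds of digits used in real cryptography, and a quantum computer running Shor's algorithm would instead take time roughly polynomial in the number of digits.

```python
import time

def trial_division(n):
    """Return the smallest prime factor of an odd n > 1 by brute force (n itself if n is prime)."""
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f
        f += 2
    return n

# Semiprimes built from well-known primes (the 10,000th, 100,000th, 1,000,000th, and
# 10,000,000th).  Each step makes the smaller factor roughly 10x bigger, and the
# brute-force search roughly 10x slower.
for p, q in [(104_729, 1_299_709), (1_299_709, 15_485_863), (15_485_863, 179_424_673)]:
    n = p * q
    start = time.perf_counter()
    factor = trial_division(n)
    elapsed = time.perf_counter() - start
    print(f"{n} = {factor} x {n // factor}  ({elapsed:.2f}s)")
```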

I don’t think I can summarize an 8-page detailed article here, but let’s just say that in this short article, Aaronson manages to:

  • Give a high level overview of basic complexity theory
  • Give a background on what Quantum Computing is, generally
  • Give a background on what makes Quantum Computing different, algorithmically
  • Give examples of the types of problems that QC will significantly improve
  • Give examples of the types of problems that QC will not significantly improve
  • Give interesting mathematical & physics implications of QC algorithmic theory
  • Intersperse the above with incredibly useful diagrams and drawings

Here is my favorite chart in the article – a simple one that maps the changes that quantum computing introduces in the world of algorithmic complexity:

And that's just one of the sidebars!  🙂  It's interesting to note that, after scanning this, I discovered from Scott's blog that he had to fight to get that diagram included!

The complexity class inclusion diagram on page 67 was a key concession I did win. (Apparently some editors felt a Venn diagram with P, NP, BQP, and PSPACE would be way too complicated, even for readers who regularly gobble down heaping helpings of M-theory.) As you can imagine, exposing people to this stuff seemed pretty important to me: this is apparently the first time P, NP, and NP-completeness have been explained at any length in Scientific American since articles by Knuth and by Lewis and Papadimitriou in the 1970’s.

Much appreciated, Scott.

Scott Aaronson has his own blog:
http://www.scottaaronson.com/blog

and he also runs an online encyclopedia for complexity classes:
http://www.complexityzoo.com

And to think, I was just at MIT and missed the chance to meet him. 🙂

The article is not yet fully online, but if you have a chance, I highly recommend picking up a copy of the issue.  Scott has posted an early draft of his article, in PDF, here.  Or better yet, subscribe.  It really is the one scientific magazine to subscribe to if you want to keep up-to-date on a broad range of scientific discovery.

Get LinkedIn on Your iPhone, Now!

It’s live, it’s live!  After weeks in beta, it’s LIVE!

This is just the first release, but already you can:

  • Search LinkedIn from your iPhone
  • View all your contacts and their full profiles, from your iPhone
  • Invite new people you meet, from your iPhone
  • Browse your network updates, from your iPhone

What are you waiting for?  You should immediately:

  1. Use your iPhone to go to: http://iphone.linkedin.com
  2. Hit the (+) button in the middle of the bottom control bar in Safari on your iPhone.  This adds LinkedIn as one of your default web clippings on your iPhone desktop.  An absolutely gorgeous "IN" logo will grace your iPhone.

For those of you who don't have an iPhone, this is actually the same URL that serves http://m.linkedin.com, our general mobile application URL.  Of course, if you don't have an iPhone yet, you might want to add an extra step before (1) above.

I installed the beta of this application on my wife's iPhone, and I play with it incessantly when we're on the road.  It's completely addictive.

Brought to you, with love, by a major web company that develops its site exclusively on Mac OS X.

The LinkedIn Wizard is Out!

It’s Thursday night, and in Mountain View right now, great new features and enhancements are rolling out, as usual, to the LinkedIn website.

There is much lore about the origins and purpose of the LinkedIn wizard, but you only see him on Thursday nights, when we’re doing a release that requires downtime.

Right now, I’m in Boston, so I’m missing the release and the dinner and the Rock Band sessions that go with it.  But fortunately, the Wizard is there, to help me feel a little closer to home.

The Subprime Primer in Stick Figures

Two email forwards tonight as posts.  My apologies.

This one will appeal to you finance fanboys out there.

The Subprime Primer

This is a 2.4MB PowerPoint presentation that walks through the basics of the Subprime crisis.  It’s extremely funny, if you are into stick figures that use foul language.  It definitely wins the award for best use of a Norwegian stick figure swearing in a PowerPoint document.  (I will consider others for the award, if you post links.)

Yes, please don't download it if you are offended by any of the seven words banned by the FCC on radio.  And yes, if you watch Deadwood on HBO, you will be more than OK with this deck.

The Worst Appetizer: The Chili’s Awesome Blossom

My wife forwarded a really fun article:

Divine Caroline: The Worst Artery Cloggers in America

It's a sick type of voyeurism, I know.  Dietary rubbernecking.  You look at the nutritional stats and you go, "How could anyone eat one of these things?!?"

Well, in all fairness, I do believe that in younger days I did actually eat one of these: The Chili’s Awesome Blossom.  Check it out:

Worst Appetizer: Chili’s Awesome Blossom
2,710 calories, 203 grams fat

Chili’s is all sorts of wrong. The one and only time I ate there, I almost dove over the table and made the waitress give me my money back, it was so bad. It’s clear now I was just in a salt and sugar-induced rage. Even with their crappy food laden with sodium, fat, and emulsifiers, you’ve got to admire them. How they turned a simple onion into a day’s worth of calories and three days worth of fat is a miracle of food science.

Come on, that's fairly amazing.  It's not easy to hide 2,710 calories in an appetizer.  Really, it isn't.  Sure, you could serve someone a bowl of 203g of lard, but would they really eat it?  (That's 40.6 teaspoons of Crisco, for those of you counting out there.)  I think it's a fairly impressive achievement.  Given that it has been on the menu for over 10 years, it must be fairly successful.

I was somewhat surprised to see a Chipotle item on the list.  Yes, I know anything called a Super Burrito is unlikely to appear on the Weight Watchers guide to Tex-Mex, but this was a little scathing:

Worst Meal in Tin Foil: Chipotle Barbacoa Super Burrito
1493 calories, 68 grams fat (22.5 saturated), 3,644mg sodium, 151 grams carbohydrates, 144 mg cholesterol

The other day my coworkers and I went to Chipotle. It was all shits and giggles while we were eating—“hey, pass the guac!” “want some more chips?!”—but then about an hour later, things went terribly wrong. Our stomachs hurt. Our mouths puckered. I drank about a gallon of water but couldn’t seem to quench my thirst. The bathroom stunk.

I think it’s ironic that Chipotle sponsors a biking team. No athlete would ever eat there.

You should check out the full article.  Lots of fun.

Psychohistory Reaches 450 Posts

Just a quick milestone for this blog.  It’s been a couple of months since I’ve posted any stats.  I don’t have the patience tonight to go into extreme detail, but here are some high level numbers for Psychohistory to date:

Blog Stats

  • Total Views: 272,308
  • Best Day Ever: 4,536 — Monday, March 26, 2007
  • Posts: 450
  • Comments: 1,312
  • Categories: 35

Here is a nice graph that shows page views, month by month, since I launched this new blog in August 2006:

The big spike in March 2007 was due to a flurry of interest in my Battlestar Galactica posts, which remain to this day the most popular posts on this blog.