Vanguard Cuts ETF Fees… Again

Vanguard announced this week yet another reduction in ETF fees on some of its major funds:

Earlier this month, Vanguard shaved its fees on four of its popular ETFs. Those were:

  • Growth ETF (AMEX: VUG), from 0.11% to 0.10%.
  • Value ETF (AMEX: VTV), from 0.11% to 0.10%.
  • Small-Cap Growth ETF (AMEX: VBK), from 0.12% to 0.11%.
  • Small-Cap Value ETF (AMEX: VBR), from 0.12% to 0.11%.

Also, the new Europe Pacific ETF (AMEX: VEA) wound up the year at 0.12%. The fund opened last July and was expected to assess expenses of around 0.15%.

“We originally estimated an annualized expense ratio at higher levels,” said Rebecca Cohen, a Vanguard spokesperson. “But after the year closed out, expenses wound up being less than originally estimated.”

While relatively tiny moves, the latest changes further distance Vanguard’s ETF lineup from the pack.  They also bring to 18 the number of ETFs on which Vanguard has cut expense ratios within the past four months.

The flurry of cost-cutting leaves Vanguard with an average expense ratio of 0.16%.  As of year-end 2007, Lipper data put the expense ratio of the average U.S. ETF at 0.53%.

“As ETFs grow in size, they generally become more efficient to run,” said Vanguard in a statement.

As a shareholder-owned company, Vanguard says its “policy has always been to pass the savings from those efficiencies through to investors. The new expense ratios reflect the lower costs of managing these products.”

This is why I am such a loyal customer of Vanguard and Vanguard financial products.  Their entire brand promise is built around minimizing management costs for investors, and as a result, they proactively reduce rates constantly.  Unlike other institutions that use low fees as a short-term “loss leader” to bring in assets, Vanguard genuinely strives for the lowest cost structure, and passes those savings on to their investors.

The idea that you can now buy an index of small-cap, domestic, growth companies for 11 basis points a year is just amazing.  11 basis points!  That means if you had $10,000 invested, the annual overhead cost would be just $11.   And that’s for a fairly focused index – I believe the broad-based US domestic stock index ETF from Vanguard is down to just 7 basis points!
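The basis-point arithmetic is easy to sketch (a basis point is 1/100th of a percent; the function below is my own, just for illustration):

```python
# Toy illustration (not Vanguard's actual accounting): the annual cost
# of an expense ratio is simply the ratio applied to the balance.
def annual_expense(balance, expense_ratio_bps):
    """Annual fund overhead in dollars, given the expense ratio in basis points."""
    return balance * expense_ratio_bps / 10_000

print(annual_expense(10_000, 11))  # $11 a year on $10,000 at 11 bps
print(annual_expense(10_000, 7))   # $7 a year at 7 bps
```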

When at all possible, I tend to go with the Vanguard index ETF/Fund.  In fact, since many brokerages (like Fidelity) charge exorbitant commissions on the Vanguard funds,  you can now just buy the ETFs like any other stock.  Pay a cheap commission once, and pay cheap expenses for decades.

Hard to beat a great product with a great cost from a great firm.  Hard to beat.

To Know Elliot is to Love Elliot

We just got through an amazing launch last night at LinkedIn.  New homepage, new site-wide navigation, new Status feature, and countless other small enhancements.

This is a very funny blooper reel that Elliot & Mario recorded while trying to film the video to launch the new homepage.  It was so funny, they played it for the entire company at lunch yesterday.  Elliot has become a cult phenomenon.

I can’t tell you how proud I am to have Elliot on our team.  Enjoy.

Inflation, Hedonics, and How Silicon Valley May Have Wrecked Our Monetary Policy

I read a really interesting book on my trip to Boston last week.  It’s called Greenspan’s Bubbles: The Age of Ignorance at the Federal Reserve, by William Fleckenstein.  I’ve read Bill Fleckenstein’s columns on-and-off since 1999, when I found him through Herb Greenberg.  He’s definitely an intelligent guy, and while he presents like a perma-bear, the reality is that he’s really just a very strong, traditional, bottom-up fundamentals-based valuation guy.

He has a real axe to grind in this book, but I’m going to do a book review in a separate post.  However, one of the topics he raised was so interesting to me, I had to write a post about it.

Summary: I think we seriously messed up our monetary policy in the 1990s.

To be most specific, I think that in the 1990s, we made a fundamental change to the way we track inflation statistics for the United States that on the surface seems logical.  But unfortunately, the realities about the economics of computers are so extreme, they may have completely distorted the inflation numbers for the entire country.  And if you distort the inflation numbers for the entire country, you run the risk of distorting the monetary policy of this country.  In fact, if you seriously mess up inflation calculations, you also affect fiscal policy, social benefit policy, and even global economic stability.

Yeah, it could be that big.

OK, here’s the information from the book that got me thinking.  It starts on Page 39, in the chapter called, “The Bubble King”.  Fleckenstein explains three changes that were made to the way the US calculates consumer price inflation (CPI) in 1995:

  • Change 1: Move from Arithmetic to Geometric Rates.   Ok, this one is perfectly legitimate.  After all, inflation rates compound year to year, so calculating the rate as a geometric progression is fundamentally correct.  I was actually shocked to find out we didn’t do this before, frankly.  True, at low percentages, arithmetic and geometric calculations don’t vary much, but they do vary, and geometric is absolutely the right way to calculate the number.

    For those of you asking what the difference is, let’s use this example.  Say over 5 years, the price of milk goes up 50%.  Arithmetic calculation would say 50%/5 = 10% per year.  The problem, of course, is that if you actually raise the price by 10% per year, you get a lot more than 50% because the price increases compound each year.  In Year 1, you’d go from $1.00 to $1.10, and in Year 2, you’d go to $1.21.  By Year 5, you’d be at $1.61, not $1.50.  It’s just like compounding interest in your savings account.   Geometric calculations take this into account.  Instead of 10%, they would say the inflation rate was 8.45%, which over 5 years compounds to 50%.

    Doing this lowers the number reported, but it’s fundamentally the correct number to report on an annual basis.  So far, so good.

  • Change 2: Asset Substitution.  This one is a little murkier.  Basically, the way that economists calculate inflation for consumer goods is that they take a representative sample of products – hundreds of them.  They then track the prices for these products each year.  If you’ve ever seen those funny articles that track the “price index of the 12 days of Christmas” every year, you get the idea.  🙂

    Asset substitution covers the case where similar goods might be substituted by people if one rises in price more than the other.  Inflation is lower for the person, because instead of buying the high priced item, they buy the lower priced item.  For example, let’s say the basket of goods included a 12-ounce can of soda.  If the price of soda skyrocketed for some reason, most people would not actually spend the money, but would drink less soda and more water.  The extent to which that substitution happens means that the inflation rate is actually lower for people, because they don’t feel the full impact of the rise in price of soda.

    Fleckenstein argues that this change was “truly absurd.”  Like a lot of the analysis in the book, that’s a significant exaggeration.  The truth is, the fundamental case for substitution is sound.  But like any of these economic techniques, if abused, it could introduce huge errors into the calculation of inflation.

  • Change 3:  Hedonic Adjustments.  OK, this is the one that has me worried.  The CBO describes these as “quality adjustments”.  Once again, the logic behind them is sound.  It’s the execution that’s troubling.  Hedonic adjustments account for the fact that if you improve the quality and features of one of the items in the basket of goods, the price might rise due to that increase in feature set, not inflation.  For example, if in 2001 a Honda Civic has 145 horsepower, and in 2004 a Honda Civic has 160 horsepower, then the 2004 Honda Civic actually has 10% more horsepower than the 2001 version.  To the extent that people pay for horsepower, the inflation numbers are adjusted to reflect that part of the price increase in the Honda Civic is due to increase in function, not just inflation.

    Like asset substitution, this could easily be abused, since it involves a judgment call – how much has the product improved vs. how much has the price simply risen due to inflation?  It’s a hard line to draw, especially since in 2004 there are no new 145-horsepower Honda Civics around for an apples-to-apples comparison.
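The arithmetic-vs-geometric difference from the milk example is easy to verify yourself (my own sketch, not the BLS methodology):

```python
# Milk example: price rises 50% over 5 years.  What's the annual rate?
def arithmetic_rate(total_increase, years):
    # the naive pre-1995 approach: just divide
    return total_increase / years

def geometric_rate(total_increase, years):
    # the rate that, compounded annually, produces the total increase
    return (1 + total_increase) ** (1 / years) - 1

print(round(arithmetic_rate(0.50, 5) * 100, 2))  # 10.0  (overstates it)
print(round(geometric_rate(0.50, 5) * 100, 2))   # 8.45  (compounds to exactly 50%)
```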

So, now that you’ve gotten your fill of Macroeconomics for the day, here’s the part where we may have wrecked our monetary policy.

Moore’s Law

Well, it’s not just Moore’s Law.  It’s the pace of product improvement in the high tech industry, specifically hardware.  It’s huge.  It’s unbelievable.  There has never been a manufactured good like it – no other product doubles in capability every 18 months the way the computer does.  Hard drives double in size.  I bought a 40MB external hard drive in 1993 for $200.  I just bought a 1TB drive for the same price last month.  That’s roughly a 25,000-fold increase in storage for the same price in 15 years.

Try feeding that through “Hedonic Adjustment” and see what you get.  A huge deflationary element.

Now, that wouldn’t matter, except for one thing:  computers have become a decently large chunk of the US economy.  Not huge, mind you.  The US economy is now over $13 trillion, and computers are lucky to make up 2-3% of that.  But 2-3% is actually a big number when you start feeding ridiculous improvements in “quality/features per dollar” through it.
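To see just how extreme the implied quality-adjusted deflation is, run the hard-drive numbers through a back-of-the-envelope version of the adjustment (my own sketch, not how the BLS actually computes it):

```python
# $200 bought 40 MB in 1993 and roughly 1,000,000 MB (1 TB) in 2008.
# Treat price-per-megabyte as the "quality-adjusted" price and see the
# implied annual deflation rate over 15 years.
mb_1993, mb_2008, years = 40, 1_000_000, 15

improvement = mb_2008 / mb_1993                       # storage per dollar, 25,000x
annual_deflation = 1 - (1 / improvement) ** (1 / years)

print(f"{improvement:,.0f}x more storage per dollar")
print(f"{annual_deflation:.1%} implied annual deflation")  # about 49% per year
```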

Let me jump to page 101 of the book, in the chapter called “The Stock Bubble Bursts”:

James Grant, editor of the always insightful Grant’s Interest Rate Observer, was one skeptic who took the trouble to dissect the complicated subject that Greenspan seemed to accept at face value.  In the spring of 2000, Grant published a study by Robert J. Gordon, a Northwestern University economics professor, who had prepared for the Congressional Budget Office a paper with a shocking revelation:

There has been no productivity growth acceleration in the 99% of the economy located outside the sector which manufactures computer hardware… Indeed, far from exhibiting a productivity acceleration, the productivity slowdown in manufacturing has gotten worse: when computers are stripped out of the durable manufacturing sector, there has been a further productivity slowdown in durable manufacturing in 1995-99 as compared to 1972-95, and no acceleration at all in nondurable manufacturing.

Grant backed that thunderbolt up with another study conducted by two economists, James Medoff and Andrew Harless.  Their contention was that the use of a hedonic price index grossly misrepresented the actual data.

This is bad news.  Bad bad news.

In case you are wondering, the fundamental question that our Federal Reserve and other governmental agencies concerned with the US economy ask themselves is how much of the growth in the economy is due to three factors:

  • Population growth
  • Productivity growth
  • Inflation

If our calculation of inflation is off, it drastically changes our calculation of productivity.  Productivity is the measure of how much economic value is generated from one time-unit of work.  The 1990s were largely heralded as a decade of re-invigorated productivity growth.  It’s why some people think Robert Rubin (or Bill Clinton) was great.  It’s why people believed in a new economy driven by technological progress.
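The dependency is easy to see with rough numbers (illustrative values of my own, not official statistics): productivity growth is essentially what’s left of nominal growth after you subtract population growth and inflation, so understating inflation overstates productivity roughly one-for-one.

```python
# Illustrative decomposition with made-up numbers, not official data:
# productivity ~= nominal growth - population growth - inflation.
nominal_growth = 0.055      # 5.5% nominal GDP growth (hypothetical)
population_growth = 0.010   # 1.0% (hypothetical)

def implied_productivity(reported_inflation):
    return nominal_growth - population_growth - reported_inflation

for cpi in (0.030, 0.020):  # "true" inflation vs. an understated CPI
    print(f"inflation {cpi:.1%} -> productivity {implied_productivity(cpi):.1%}")
```

Shave a percentage point off reported inflation, and a percentage point of phantom productivity growth appears out of nowhere.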

The data above is disturbing.  Yes, it suggests that high tech had a phenomenal impact on our aggregate numbers.  But those numbers are totally misleading if it turns out that 99% of the economy was not, in fact, seeing productivity growth.  Worse, it’s possible computers were actually masking continued weakness in every other area.

Look, I’m fairly sure that the people responsible for collecting this data are intelligent, and that this issue has likely been raised already.  It’s also possible that this book and its citations are already known and discredited.

Still, I’m left with the following thoughts:

  1. Is the above data true?  If so, does this mean the 1990s were not, in fact, a real productivity boom for the economy overall?
  2. If these issues are true and known, are the Federal Reserve, Treasury, Congress, et al. taking this into account when they make monetary and fiscal policy decisions?  If inflation is understated, then interest rate cuts, fiscal stimulus, and a whole host of other policy decisions could be disastrous.  We could end up with HUGE inflation in everything except computers to make the numbers balance.  (I feel like this is that line from “The Matrix Reloaded” – the system is desperately trying to balance the equation.)
  3. When they make hedonic adjustments for computers, do they take actual utility into account?  Sure, today’s Windows PC is 3x faster than one from five years ago, but the latest versions of Windows & Office are much more resource intensive than five years ago too.  My Mac Plus booted faster than my PowerMac G5.  How do they measure the hedonic adjustment for computers?  Are they grotesquely over-estimating the increased value from hardware improvements, without discounting the resource requirements of software to provide equivalent “utility”?

Feel free to comment if you have pointers to information either confirming or refuting the above issues.  This hits home for me as an issue that ties together two of my strongest personal interests – computers & economics.

Also, feel free to post this blog URL to other boards or forums where experts might be able to answer some of the above questions.

Karl Wiley Joins Motif as President of US Operations

Caught this on my Google News Alert today from PRLog:

Motif, Inc., a leading global knowledge-based BPO services provider announced today that Karl Wiley has joined the company as President of U.S. operations. Mr. Wiley will be responsible for all of Motif’s U.S. based operations, including corporate strategy, sales & marketing, key account management and M&A. He will be focused on driving accelerated growth for Motif by attracting new clients, expanding into additional industries and service lines, and growing activity from Motif’s current client base.

Mr. Wiley joins Motif after more than six years as an executive with eBay. Most recently he served as the Chief Operating Officer of MicroPlace, eBay’s start-up initiative providing a retail investment marketplace in the Microfinance industry. Prior to that, he was the general manager of eBay’s $5+ billion Technology and Media categories, and led eBay’s B2B wholesale initiative. In these roles, Mr. Wiley was responsible for strategy, consumer marketing, product management and customer service, and managed eBay’s relationship with many major branded retailers and manufacturers.

Karl was one of the great eBay Category Managers.  I first worked with Karl when he was part of the Business & Industrial team, which turned out to be an incredible pool of leadership talent.  At the time, Karl was the primary driver & business sponsor for product support for wholesale lots at eBay. For me, it was one of the first projects where I felt like I was truly working on features that were driven by the eBay selling community itself, and not from just internal motivation.  I learned a lot from my efforts with the B&I team, and even after the category management for wholesale lots was disbanded, I still ended up leading the course on Buying & Selling in Lots at eBay Live in 2004 & 2005.  Packed rooms, both times.

Congratulations, Karl, and best of luck with your new venture.

One minor quibble, of course: it’s time to update your LinkedIn profile.

Apple: Feature Requests for AppleTV and/or FrontRow

I’m not sure if anyone from Apple is reading this post, but hope springs eternal.

Listen, I love my AppleTV.  Every week I convert more and more of my movies to MP4 and add them to iTunes.  And I love the fact that FrontRow 2.0 in Mac OS X 10.5 (Leopard) is basically the AppleTV software.  Beautiful.  My Mac Mini, with a 500GB USB 2.0 drive, is an AppleTV on steroids.  Perfect.

I now have over 200 hours of movies and TV ripped for my two Apple boxes.

But there is a problem.  Two, really.  I need to request two key feature additions for the next dot-release of AppleTV and FrontRow:

Request 1: Please add video playlists

This should be obvious, because you already have them on the iPod, but I really need these on the AppleTV.  I have a lot of ripped TV shows and cartoons that are only 5-20 minutes in length.  What I want to do is arbitrarily create a video playlist, and have the AppleTV continuously play some number of shows in a row.  Right now, the device has no playlists.  So either I have to rip a custom MP4 of different combinations, or I have to manually play each show individually.

Example: School House Rock

I ripped this DVD.  37 Episodes, each 3 minutes.  Of course, I could not possibly rip the individual shows given the current AppleTV interface.  Instead, I ripped a full two-hour block of all 37 episodes back-to-back.  I then ripped smaller, 8 or 9 episode sequences based on topic.

What I should have been able to do is:

  1. Rip each 3 minute short as a separate file
  2. Create playlists of different groups and sequences of those segments

A lot of great video content is short, and makes sense to view in playlists.  Ripping different combinations into a single video file is wasteful, and clutters the interface.

Request 2: Folders

This one is easy because Tivo already figured this out with version 4.0 of their interface (they are now on 9.x I think).

Let me create folders to group together content so that I don’t have a linear list that goes on forever.  For example, if I have all 6 Star Wars movies, let me create a folder for them.  If I have 10 hours of Band of Brothers, let me group it together.

Right now, the only “grouping” functionality is through the TV Shows interface.  Frankly, that’s pretty clunky.  I’m not even sure I like breaking video apart into movies and TV shows – I certainly don’t have that breakdown on my Tivo.  I’d rather just see TV Shows as a folder of video, sorted by season and then episode, much like music broken down by artist and then album.

True, I wouldn’t mind dynamic grouping based on tagged elements of the movies, but that’s actually overkill for now.  I’d settle for good, old-fashioned, manual folders.  A simple directory structure could help the current interface and remote scale to handling hundreds of movies, instead of dozens.

So, if you are listening Apple, help me out here.  Thanks.

The Limits of Quantum Computers by Scott Aaronson

I had a business trip to Boston this past week, which means I got a lot of good reading hours in on the plane ride across the country.  As a result, expect to see some intellectually inspired posts this week.

Tonight, I’m going to start off with an easy one – the most recent issue of Scientific American.  It is a great issue.

Actually, three of the articles were blog worthy.  Tonight, I’m going to highlight the great piece by Scott Aaronson called “The Limits of Quantum Computers“.

Here is a synopsis, from the top of the article:

  • Quantum computers would exploit the strange rules of quantum mechanics to process information in ways that are impossible on a standard computer.
  • They would solve certain specific problems, such as factoring integers, dramatically faster than we know how to solve them with today’s computers, but analysis suggests that for most problems quantum computers would surpass conventional ones only slightly.
  • Exotic alterations to the known laws of physics would allow construction of computers that could solve large classes of hard problems efficiently. But those alterations seem implausible. In the real world, perhaps the impossibility of efficiently solving these problems should be taken as a basic physical principle.

Nah, I don’t think that does it justice.

I’ve been following Quantum Computing off-and-on since the mid-1990s.  I took my first Automata & Complexity course at Stanford (CS 154, from Rajeev Motwani) back in 1995.  It was one of the truly mind-opening courses in the Computer Science undergrad curriculum.  Recognizing that there are mathematical frameworks not just for solving problems, but for describing their complexity, is fascinating.

Quantum Computing is fascinating because it takes advantage of the truly strange physics of entanglement, a state in Quantum Mechanics where particles can share a matching, but unknown, fate.  A separate branch of algorithmic mathematics has sprung up around analyzing what types of problems, if any, would be simpler to solve on a computer that leveraged these “Quantum Bits”, or QuBits for short.  At the same time, molecular scientists have struggled to make progress building very small quantum computers.

To date, there are a small number of algorithms that Quantum Computers have been proven to solve significantly more efficiently than traditional computers.  Interestingly, most of them revolve around factoring, which happens to be the foundation of most of our security algorithms.  It turns out that factoring a very large number into two primes is very hard for classical computers, but efficient for quantum computers.
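A toy illustration of the classical side of that asymmetry (my own sketch; Shor’s quantum algorithm factors in polynomial time, while the best known classical methods are super-polynomial):

```python
import math

def trial_division(n):
    """Naive classical factoring: up to sqrt(n) trial divisions, which
    is exponential in the number of digits of n.  Fine for toy numbers,
    hopeless for the 300-digit semiprimes used in cryptography."""
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d, n // d
    return n, 1  # n is prime

# A tiny semiprime, shaped like the ones underlying RSA-style security:
print(trial_division(43 * 47))  # (43, 47)
```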

I don’t think I can summarize an 8-page detailed article here, but let’s just say that in this short article, Aaronson manages to:

  • Give a high level overview of basic complexity theory
  • Give a background on what Quantum Computing is, generally
  • Give a background on what makes Quantum Computing different, algorithmically
  • Give examples of the types of problems that QC will significantly improve
  • Give examples of the types of problems that QC will not significantly improve
  • Give interesting mathematical & physics implications of QC algorithmic theory
  • Intersperse the above with incredibly useful diagrams and drawings

Here is my favorite chart in the article – a simple one that maps the changes that quantum computing introduces in the world of algorithmic complexity:

And that’s just one of the sidebars!  🙂  It’s interesting to note that, after scanning this chart, I discovered from Scott’s blog that he had to fight to get that diagram included!

The complexity class inclusion diagram on page 67 was a key concession I did win. (Apparently some editors felt a Venn diagram with P, NP, BQP, and PSPACE would be way too complicated, even for readers who regularly gobble down heaping helpings of M-theory.) As you can imagine, exposing people to this stuff seemed pretty important to me: this is apparently the first time P, NP, and NP-completeness have been explained at any length in Scientific American since articles by Knuth and by Lewis and Papadimitriou in the 1970’s.

Much appreciated, Scott.

Scott Aaronson has his own blog:
http://www.scottaaronson.com/blog

and he also runs an online encyclopedia for complexity classes:
http://www.complexityzoo.com

And to think, I was just at MIT and missed the chance to meet him. 🙂

The article is not yet fully online, but if you have a chance, I highly recommend picking up a copy of the issue.  Scott has posted an early draft of his article, in PDF, here.  Or better yet, subscribe.  It really is the one scientific magazine to subscribe to if you want to keep up-to-date on a broad range of scientific discovery.

Get LinkedIn on Your iPhone, Now!

It’s live, it’s live!  After weeks in beta, it’s LIVE!

This is just the first release, but already you can:

  • Search LinkedIn from your iPhone
  • View all your contacts and their full profiles, from your iPhone
  • Invite new people you meet, from your iPhone
  • Browse your network updates, from your iPhone

What are you waiting for?  You should immediately:

  1. Use your iPhone to go to: http://iphone.linkedin.com
  2. Hit the (+) button in the middle of the bottom control bar in Safari on your iPhone.  This adds LinkedIn as one of your default web clips on your iPhone desktop.  An absolutely gorgeous “IN” logo will grace your iPhone.

For those of you who don’t have an iPhone, this is actually the same URL that serves http://m.linkedin.com, our general mobile application URL.  So even if you don’t have an iPhone yet, you can still use the URL from step (1) above in your mobile browser.

I installed the beta of this application on my wife’s iPhone, and I play with it incessantly when we’re on the road.  It’s completely addictive.

Brought to you, with love, from a major web company that develops its site exclusively on Mac OS X.