OK, normally I stay away from posts that could be perceived as political. But it’s hard to comment on economic issues in the heat of this intense primary season without venturing into those dangerous waters.
I’m going to try to be careful here not to be too specific about any candidate or their plans. I felt, however, that this topic was non-obvious enough that it was worth commenting on, despite the danger. I can only hope that these comments might reach the ears of all three of the currently viable candidates…
Please don’t raise capital gains taxes in this environment
Or at least, please don’t raise them without also indexing gains to inflation. It’s not a serious problem when inflation is extremely low for long periods of time, but it could be very very bad if we are, in fact, heading into an environment with a weak dollar and higher prices.
Why? Because the capital gains tax today is based on nominal gains, not real gains.
Not clear on why this is a problem? Here is an example:
Let’s say you bought a stock in 2009. It’s a good stock, but not a great one, and it returns roughly 10% per year for the next 7 years. In fact, by 2016 the stock has doubled, exactly, from $10 per share to $20 per share. Since you bought 1000 shares, you’ve just turned $10,000 into $20,000, for a $10,000 gain.
That sounds good, and you might be thinking, “Well, with a $10,000 gain, why should I begrudge the government $2,000 or even $2,800 of it? After all, it’s this great country that has made that type of gain possible.”
Here’s the problem. Let’s say inflation over the next 7 years is higher than it has been: 5% instead of 3%. Well, then $10,000 in 2016 doesn’t actually buy what it did in 2009. In fact, it takes over $14,000 in 2016 dollars to buy the same car that $10,000 did in 2009.
But the tax man doesn’t care. The IRS still calculates your gain as $10,000, not $6,000. So $2,800 might be 28% of your nominal gain, but it’s 47% of your real return, after inflation.
Ouch.
It gets worse. If inflation manages to soar to around 8%, which it did in the 1970s, then that $2,800 tax approaches your entire real return. At 8.1%, in fact, your after-tax real return turns negative – you end up paying a tax of over 100% of your inflation-adjusted gain.
Double-Ouch.
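The arithmetic above is easy to check. Here is a minimal Python sketch of the example – the function name and numbers are just the post’s hypothetical, not tax advice:

```python
def real_tax_rate(basis, sale_price, years, tax_rate, inflation):
    """Effective tax rate on the inflation-adjusted (real) gain."""
    nominal_gain = sale_price - basis
    tax = tax_rate * nominal_gain
    # Restate the basis in sale-year dollars before measuring the real gain.
    adjusted_basis = basis * (1 + inflation) ** years
    real_gain = sale_price - adjusted_basis
    return tax / real_gain

# $10,000 doubling to $20,000 over 7 years, taxed at 28% of the nominal gain:
print(real_tax_rate(10_000, 20_000, 7, 0.28, 0.05))   # ~0.47: 47% of the real gain
print(real_tax_rate(10_000, 20_000, 7, 0.28, 0.081))  # >1.0: more than the real gain
```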
That’s pretty much what happened to people in the 1970s. And it really did have a drastically negative effect on capital investment and tax collection, because rich people basically decided to either avoid capital investments, or they decided to postpone taking gains. (Little known fact, but capital gain tax revenue has increased since we lowered the rate to 15%… a combination of better market performance and likely some acceleration of people taking gains.)
Now, in the 1980s and 1990s, this wasn’t such a big deal, because we both lowered capital gains tax rates and we killed inflation. Or, at least, we wounded it. When inflation is low, and the holding periods are relatively short (under 10 years), you could argue that the inflation “tax” automatically adjusts the 15% up to something higher, but manageable.
So, I think that leaves us in a policy bind, since it’s very likely we’re headed for higher inflation in the next 10 years. In fact, you could argue that cutting the capital gains tax commensurate with the increase in inflation and the average holding period might make sense, if the goal was economic neutrality.
One solution would be to index capital gains for inflation. It’s a sticky problem, because it means that taxpayers would have to have a table of “multipliers” to apply to any investment, based on the year of investment. You would also likely have to exclude shorter holding periods to avoid trading scams, and have some sort of wash-sale like rule. But this is all doable.
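Mechanically, the indexing would look something like this. A sketch where the multiplier table is entirely hypothetical, standing in for whatever official CPI factors the government would publish:

```python
# Hypothetical inflation multipliers by purchase year (illustrative numbers only).
CPI_MULTIPLIER = {2009: 1.41, 2012: 1.25, 2015: 1.04, 2016: 1.00}

def indexed_gain(basis, sale_price, purchase_year):
    """Taxable gain with the cost basis restated in sale-year dollars."""
    adjusted_basis = basis * CPI_MULTIPLIER[purchase_year]
    return max(sale_price - adjusted_basis, 0.0)

# The post's example: a $10,000 basis from 2009, sold for $20,000.
print(indexed_gain(10_000, 20_000, 2009))  # ~5,900 taxable instead of 10,000
```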
If you see another path around this problem, I’d love to hear it. Right now, it feels like inflation is going to take a serious whack at capital investment if we’re not careful.
Budget conscious Mac shoppers can save a bundle on a $399 mid-level Macintosh computer running OSX called an OpenMac sold by a Florida-based company called Psystar. That beats comparable offerings from Apple, whose cheapest similar computer, a Mac Pro, starts at $2000.
Now for the catch. The Psystar computer appears to violate Apple’s end user license agreement (EULA) for Macintosh OSX, which prohibits running the operating system on anything other than Apple-branded computers.
The Leopard compatible Mac is built using standard computer parts with specs that include a 2.2GHz Intel Core 2 Duo, 2GB of DDR2 memory, Integrated Intel GMA 950 Graphics, 20x DVD+/-R Drive, four USB ports and a 250GB 7200RPM drive, according to the Website MacRumors.com. I would’ve pulled the specifications from the Psystar Website myself, but the site was not functioning and, the last time I checked, displayed the message: “Site is currently offline due to the massive influx of users in the last 24 hours.”
So, obviously, coverage like this is sensationalist. This $399 machine is nowhere close to the specifications of the $1999 Mac Pro. It’s much closer to the Mac Mini, which is $599 to start, although this offers more expandability. However, everyone loves to talk about Mac clones, so you can forgive the urge to create a big story here.
Since I happened to be at Apple in 1997 when Apple killed the clones, I’d feel somehow irresponsible if I didn’t remind everyone why Apple launched clones in 1995, and why they killed clones in 1997.
Circa 1997 you could buy a Mac clone made by Power Computing, Motorola, or Umax that was faster and cheaper than anything Apple was selling.
At the time, Apple was losing OS market share fast, so Mac clones were viewed as an important strategy for Apple to survive. PC World’s Charles Piller wrote: “Furthermore, no single company–no matter how creative and dynamic–can compete against an entire industry. The engine of innovation that will keep the Mac competitive has to include clone makers.”
Jobs didn’t agree with Piller’s analysis.
In one of his first major decisions as acting CEO for Apple, Jobs yanked the clone program. He saw Apple’s profits in selling computers, hardware, not licensing software. Microsoft, it was widely accepted, had already won the OS licensing race.
Sorry… this just isn’t accurate.
Gil Amelio launched the Mac clone market in a belated attempt to boost Apple marketshare. The thinking was that clone makers would expand the Mac hardware base into niches that it didn’t currently occupy, growing the base of Mac users and Mac hardware for developers to target. They assumed some small amount of cannibalization, but it was assumed that the overall pie would get bigger.
The problem was, the Mac wasn’t set up to clone easily, and Apple really didn’t have the infrastructure to support a large number of clone makers.
That, by itself, could have been just growing pains. But after just a couple years, it was clear that Apple had to kill the clone market.
Why? Economics.
The clone makers were not, in fact, expanding the Mac user base. Market share for Mac OS machines was not improving.
However, that wasn’t the worst of it. The real problem was profits.
Now, I know what you are thinking. “Microsoft has huge profits! Selling just the OS is far more profitable than selling hardware! What are you talking about, profits? Apple would mint money if they licensed the OS…”
It’s the difference between profit margin and total profit.
Let’s say Apple sells a $1500 Mac with a margin of 20%. That’s $300 in profit.
Let’s say a clone maker sells an Apple clone for $1500. Apple sells the clone maker a copy of Mac OS for $50, with a margin of 98%. That’s $49.
Uh-oh.
That’s right: to replace the margin dollars of one Apple machine, you have to sell roughly six clone licenses ($300 ÷ $49). That means the clone makers have to expand the market by 5+ machines for every one they cannibalize.
But it’s actually worse than that. Manufacturing computers has a lot of fixed costs. So, theoretically, if you cannibalize enough machines, the margins on product lines can decay. You can hit a point where you aren’t even making money on the machines you are selling, without raising prices.
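The break-even math above, as a quick sketch:

```python
mac_price, mac_margin = 1500, 0.20   # Apple's own hardware sale
os_price, os_margin = 50, 0.98       # OS license sold to a clone maker

profit_per_mac = mac_price * mac_margin      # ~$300 per Mac
profit_per_license = os_price * os_margin    # ~$49 per license

# Licenses needed to replace the profit dollars of one cannibalized Mac sale:
breakeven = profit_per_mac / profit_per_license
print(breakeven)  # ~6.1 clone licenses per lost Mac
```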
Now, Apple could have raised the OS price to the clone makers, but as you can see, not enough to make a difference.
The reason I tell this story now is that, fundamentally, Apple’s economics for Mac hardware haven’t really changed that much. They still get 20% margins. And Apple hardware is still, on average, about $1200-$1500.
Now, Apple is a much bigger company, and theoretically, it could eat the profit hit today, if it wanted to. But make no mistake, it would be a real hit to profits. And that means a hit to earnings, and that means a crashing stock price. It’s not obvious how Apple can cross this chasm without multi-billion dollar dislocation in profits over a transition period.
As a final note, I really enjoyed this lesson when I learned it back in 1997. In the 1990s, it was conventional techie & MBA wisdom that OS licensing was an obvious win for Apple, and that it was something that “had to happen” for Apple to survive. Both, of course, were proven to be categorically false.
For a while, Paul Krugman was making more and more sense to me. It had me worried, because I remember distinctly feeling more and more alienated by his commentary in the past 5+ years. But since I don’t follow him that closely, the reasons why were fading from memory.
This article snapped them back into clarity. Oh boy, is this column off-base.
I think the point of his column here was to effectively claim that there is no social security crisis, that social security is doing just fine, and that the arguments against it are contradictory and specious. I’m not really sure, though, because the point of the column kind of wanders.
In any case, he is right about one thing: the arguments against the stability of social security do contradict each other. Unfortunately, what is good for the goose is good for the gander… Krugman’s arguments seem to also contradict themselves.
The fundamental argument that is correct, unfortunately, is that Social Security is going to start doing some serious damage to the Federal Budget, starting around 2018.
Krugman is correct that there is, in fact, a Social Security surplus, engineered as part of the 1983 Social Security reforms. This surplus, however, is not saved in any sort of marketable assets. Instead, these trillions of fictional dollars have already been spent as part of the regular annual budget (yes, even after that we still run a deficit), leaving in their place special US Treasuries, redeemable in the future by the US Government.
What Krugman misses here is that US Treasuries are just an IOU that the US Government is writing to itself. US Treasuries are in fact a great asset to invest in for every single entity other than the US Government. It’s as if you decided to buy a car today by lending yourself the money. Yes, it is that silly. Guess what happens when the right hand goes back to the left hand to get payment on that loan?
How can I say that, given Social Security’s $2.3 trillion (and growing) trust fund? It’s because the fund owns nothing but Treasury securities. Normally, of course, Treasury securities are the safest thing you can hold in a retirement account. But Social Security’s Treasuries won’t help cover the program’s cash shortfall, because Social Security is part of the federal government. Having one arm of the government (Social Security) own IOUs from another arm (the Treasury) doesn’t help the government as a whole cover its bills.
Here’s why the trust fund has no financial value. Say that Social Security calls the Treasury sometime in 2017 and says it needs to cash in $20 billion of securities to cover benefit checks. The only way for the Treasury to get that money is for the rest of the government to spend $20 billion less than it otherwise would (fat chance!), collect more in taxes (ditto), or borrow $20 billion more (which is what would happen). The spend-less, collect-more, and borrow-more options are exactly what they would be if there were no trust fund. Thus, the trust fund doesn’t make it any easier for the government to cover Social Security’s cash shortfalls than if there were no trust fund.
I think Krugman does a real disservice here by pretending that this fact is some sort of charade cooked up by people who want to privatize social security. The fact is that social security, in its current structure, is part of the general budget. It has no marketable assets beyond those US Treasuries, which the US issues at its discretion anyway. The US has no sovereign wealth fund in marketable assets. That means in 2016/2017 or so, we’re going to start having to pay the piper. According to the Social Security Administration, the tab will be $96B in the red in 2020.
Sure, we can fund it with higher taxes. Or lower spending. Or both. But it’s going to start hurting as soon as it goes negative.
There are a lot of potential solutions here – but none are easy, and none erase the fact that we effectively spent our $2.3 Trillion surplus before we were supposed to. We’re going to have to pay it back, one way or another, or we’re going to have to radically rethink Social Security.
So, my apologies Mr. Krugman, but we do, in fact, have a Social Security crisis and a general budget crisis in the making. And it’s going to be in the next decade, not in 2042. My generation is going to end up paying a lot more for a lot less, assuming we even have a claim on assets at all when it’s all said and done.
There has been a lot of sensationalist talk in the past two weeks since the Bear Stearns acquisition by JP Morgan Chase. I’ve seen editorials slamming the Fed for doing too little, for doing too much, for not acting soon enough, and for acting at all.
However, I’ve seen pitifully few articles that actually explain the details of what the Federal Reserve did, and why it was so revolutionary for the almost 100-year old institution. Sure, I’ve seen commentators refer to a “$30 Billion Bailout” of Bear Stearns, especially in the context of populist rhetoric that this somehow would justify spending $30 Billion to bail out homeowners who are under-water on their mortgages. But this isn’t really a bailout.
First, the Federal Reserve action, although structured like a loan, actually seems to behave mathematically more like equity:
So far, few people have focused on what exactly the Fed is getting in exchange for supplying $29 billion to JPMorgan Chase. That’s a bit surprising because whatever the deal is, it’s far from a standard loan. The strangest twist is that even though the money goes to JPMorgan, that firm isn’t the borrower. So the Fed can’t demand repayment from JPMorgan if the Bear assets turn out to be worth less than promised.
What’s also odd is that if there’s money left after loans are paid off, the Fed gets to keep the residual value for itself. That’s what one would expect if the Fed were buying the assets, not just treating them as collateral for a loan. Vincent R. Reinhart, a former director of the Fed’s Division of Monetary Affairs and now a resident scholar at the American Enterprise Institute, said in an interview on Mar. 26: “The New York Fed is the residual claimant. That doesn’t look to me like a loan. That looks like equity.”
More detail follows down the page:
Here’s how it works: A Delaware-based limited liability company will be set up to receive, upon completion of the merger, $30 billion in various Bear holdings, such as mortgage-backed securities. The Fed will lend $29 billion to that company, which will pass all the money along to JPMorgan, Bear’s new owner. JPMorgan itself will lend $1 billion to the Delaware company. The company, managed by BlackRock Financial Management, will pay back the loans by gradually liquidating the assets. As a protection for the Fed, it gets paid back fully before JPMorgan gets back anything on its loan. The other sweetener for the Fed is that if there’s money left over even after JPMorgan gets repaid, the Fed gets it all.
From an economic perspective, this complex arrangement is functionally identical to a purchase of the Bear portfolio by the Fed—one that’s financed in small part by the subordinated $1 billion loan from JPMorgan. But the Federal Reserve Act doesn’t seem to provide for the Fed to make such equity investments. That doesn’t trouble the Fed because it argues that the $29 billion is indeed a loan—or, to use the antiquated language of the Fed’s founding legislation, a “discount” of a “note.”
This is an important point, because if the history of liquidity-impaired portfolios like LTCM’s is any indication, it’s very likely that an orderly disposal of the Bear Stearns assets could actually net gains in the long term. More importantly, since JP Morgan takes the first $1B in losses, the Fed actually gets a bye on the first 3.3% of net loss on the portfolio, if there is one.
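The structure described in the quoted article is a simple payment waterfall. Here is a sketch of the payoff at liquidation, using only the deal terms quoted above ($29B senior Fed loan, $1B subordinated JPMorgan loan, any residual to the Fed):

```python
def waterfall(liquidation_value):
    """Split liquidation proceeds: the Fed's senior loan is repaid first,
    then JPMorgan's subordinated loan, and any residual goes to the Fed."""
    fed_loan, jpm_loan = 29_000_000_000, 1_000_000_000
    fed = min(liquidation_value, fed_loan)
    jpm = min(max(liquidation_value - fed_loan, 0), jpm_loan)
    residual = max(liquidation_value - fed_loan - jpm_loan, 0)
    return fed + residual, jpm  # (Fed receives, JPMorgan receives)

# Portfolio nets $29.5B: the Fed is whole, JPMorgan absorbs the first loss.
print(waterfall(29_500_000_000))  # (29000000000, 500000000)
# Portfolio nets $32B: the Fed keeps the entire $2B upside.
print(waterfall(32_000_000_000))  # (31000000000, 1000000000)
```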
Now, you could argue that the Fed has put $29B of taxpayer dollars at risk, and that is true in a sense. But the value of that risk is not the $29B number thrown around, but a complex calculation of the actual expected gain/loss here. As I mentioned, historically, these type of crunch-induced crisis portfolios actually net out positively when given the time to unwind outside of a panic situation.
The fairest criticism I’ve seen of this action to date concerns JP Morgan’s raised offer, from $2/share to $10/share, which will not only keep Bear Stearns credit holders whole, but will also net common shareholders something of a windfall. You could argue that shareholders should have received nothing, and that credit holders should have borne the first risk tranche of the portfolio.
The problem with this argument is that it assumes that any other deal could have been workable and accepted in the limited timeframe caused by the run on Bear assets & liquidity. These situations always seem calmer from hindsight, and beg for Monday-morning quarterbacking, but the truth is, they are negotiated in the heat of panic, and the almost audible sound of an approaching, falling knife.
Morgan Stanley has teamed up with Van Eck Global to launch currency exchange-traded notes offering exposure to the Chinese renminbi and the Indian rupee. The Market Vectors – Chinese Renminbi/USD ETN (NYSE Arca: CNY) and Market Vectors – Indian Rupee/USD ETN (NYSE Arca: INR) are the first exchange-traded products to offer exposure to those two currencies. They launched today on NYSE Arca.
The notes are designed to go up in value when the named currency appreciates against the U.S. dollar, and down when the dollar strengthens. The ETNs are underwritten by Morgan Stanley, and Van Eck is the marketing agent. The notes charge 0.55% in annual fees.
The securities are already live and trading. Here is a quote for the Market Vectors – Chinese Renminbi ETN (CNY), here is a quote for the Market Vectors – Indian Rupee ETN (INR).
There are a few details worth noting. ETNs, or Exchange Traded Notes, are a relatively new innovation in index products, and as a result, there are some grey areas around their long-term tax treatment. Neither note actually owns the underlying currency. Instead, you are buying a promise from Morgan Stanley that it will pay off a return on investment matching the return of an index tied to the currency. Got it? Yes, it’s two levels of indirection… almost like a HANDLE to the currency. (Bonus points to old-school Mac developers who get the reference.)
Here are three caveats from the article:
First, unlike most currency products, they earn interest based on the U.S. Federal Funds interest rate … not local interest rates. (Although they are currently similar.)
Second, these ETNs do not pay out interest income – instead, it is added to the share value of the note. That creates a problem for investors, as the IRS has said that investors must pay taxes each year on this notional interest … even though they won’t realize the gains until they sell the note.
Finally, ETNs are debt instruments, which means investors are exposed to the credit risk of the underlying bank. Morgan Stanley seems sound, but the current market environment could give people pause.
This is an interesting option, but likely only appropriate for tax-protected accounts. Personally, I still have a soft spot for Everbank, and its currency-based bank notes, CDs, and money market funds in different world currencies.
I love the web. I can’t believe we live in a time where a guy like me can actually review the presentation behind something this momentous, in close to real time.
Slice the $270m JPMorgan just agreed to pay for Bear Stearns any way you want to and still it’s a horrible end for a storied brokerage firm. To end up paying $0.25 on the dollar for the company’s $1 in headquarters real estate, in effect, and to do it in equity, no less, is an embarrassment beyond embarrassment for people collectively incapable, at least until now, of being embarrassed.
Tragic, tragic stuff, and, we can only hope, a bottom, even if one we bounce along for some time, to one of the worst periods in modern financial markets. But trust me, there is nothing in it for anything to be proud of, other than removing much of the Bear-specific counterparty risk that would have taken everyone in the financial market out in a major way during trading tomorrow.
Here is the NYT piece, from tomorrow’s newspaper, tonight, online.
Still digesting the news from the Fed yesterday on the new $200B Term Securities Lending Facility. This type of arrangement has been discussed for some time as a possibility, but it’s still dramatic to see it unveiled like this. This is a big deal for a couple of reasons – first, it allows for 28-day loans, not just overnight, and second, it allows a much broader range of bonds as collateral, including mortgage-backed securities. Combined with the other two $100B initiatives, the Fed has opened up over half of its $700B+ balance sheet to stabilize the credit markets.
Wow.
It’s becoming fashionable in some circles to doubt the Fed. I’ll be posting a book review of “Greenspan’s Bubbles: The Age of Ignorance at the Federal Reserve” soon, and I’ve seen a lot of commentary doubting Mr. Bernanke. All I can say at this point is that it is way too soon to be counting out the Fed.
They can’t work miracles, of course, but the power of almost unlimited resources is significant, if wielded properly.
The most fascinating aspect of central banking is its amazing foundation on the irrational and the immeasurable. In the end, it’s more about confidence than anything else. By convincing the markets that you will solve the problem, you create the confidence that increases liquidity and solves the problems. You can’t be predictable, because, like in warfare, predictability leads to people thinking steps ahead and countering your actions. Like a great General, you have to be unpredictable enough to instill fear and uncertainty in those who would fight against you, and through that uncertainty, ironically, you win.
So you want uncertainty, but only the type that destabilizes those that would bet against you. You want to reduce uncertainty around the likelihood of Fed success.
Got it?
If the juxtaposition sounds funny, blame it on the fact that I read the Greenspan book and a biography of George Washington within the same two-week period.
Anyway, at times like this, it’s good to remember that the guy we have at the helm, at this time, is someone whose fundamental academic expertise is the mistakes made in the Great Depression of the 1930s, and the mistakes made in Japan in the 1990s. A quick reference from Paul Krugman:
What you probably should know is that Ben Bernanke, in his capacity as a professional economist, spent a lot of time worrying about Japan’s experience in the 1990s. (So did I.) What was so disturbing about Japan was the way monetary policy became ineffective; by the later 1990s the short-term interest rate was up against the ZLB — the “zero lower bound.” This is alternatively known as the “liquidity trap.” And once you’re there, conventional monetary policy can do no more, because interest rates can’t go below zero.
I found this article today on Money Musings about the pitfalls of trying to refinance your mortgage when you have a 2nd or HELOC on the house:
A significant number of my personal acquaintances purchased homes (newer, larger) within the last several years. Inevitably, they were also convinced that financing via an 80/20 first/second mortgage setup was the way to go. Doing so is “financially smart,” because it allows them to avoid paying private mortgage insurance.
It’s an idea that works … until it doesn’t. Consider this Baltimore resident’s story, for instance:
He needs to refi out of his nasty ARM first mortgage — he’s lucky, in that he does have decent equity in his home — but his second-mortgage holder won’t agree to a re-subordination.
Under any circumstances.
I think the 80/10/10 is more common here in the Bay Area, or at least was, back in 2003/2004. The 80/10/10 is 80% first mortgage, 10% HELOC, and 10% down payment. No mortgage insurance, and you get a HELOC, which can be useful if you need to tap assets for some reason.
This is a pretty good example of how quickly liquidity can jam up in a market like mortgages, which isn’t centrally brokered.
I’ve also seen stories lately of banks literally calling due their HELOC loans with fairly short notice. Seems to be tied to people who are underwater on their houses (debt is greater than value of house). Not a good thing if you don’t have the liquidity to cover the outstanding balance, or if you were depending on your HELOC as an emergency fund.
Another lesson on why, in the end, liquidity can be one of the most important aspects of personal finance. People tend to focus on rates of return, which of course, is a good thing to focus on. But when you need money, it’s amazing how rates of return give way to the simple ability to tap assets for cash.
Earlier this month, Vanguard shaved its fees on four of its popular ETFs. Those were:
Growth ETF (AMEX: VUG), from 0.11% to 0.10%.
Value ETF (AMEX: VTV), from 0.11% to 0.10%.
Small-Cap Growth ETF (AMEX: VBK), from 0.12% to 0.11%.
Small-Cap Value ETF (AMEX: VBR), from 0.12% to 0.11%.
Also, the new Europe Pacific ETF (AMEX: VEA) wound up the year at 0.12%. The fund opened last July and was expected to assess expenses of around 0.15%.
“We originally estimated an annualized expense ratio at higher levels,” said Rebecca Cohen, a Vanguard spokesperson. “But after the year closed out, expenses wound up being less than originally estimated.”
While relatively tiny moves, the latest changes further distance Vanguard’s ETF lineup from the pack. They also bring to 18 the number of different ETFs on which Vanguard has cut expense ratios within the past four months.
The flurry of cost-cutting leaves Vanguard with an average expense ratio at 0.16%. Through year-end 2007, Lipper data showed an average ETF in the U.S. with an expense ratio of 0.53%.
“As ETFs grow in size, they generally become more efficient to run,” said Vanguard in a statement.
As a shareholder-owned company, Vanguard says its “policy has always been to pass the savings from those efficiencies through to investors. The new expense ratios reflect the lower costs of managing these products.”
This is why I am such a loyal customer of Vanguard and Vanguard financial products. Their entire brand promise is around minimizing management costs for investors, and as a result, they proactively reduce rates constantly. Unlike other institutions that use low fees as a short-term “loss leader” to bring in assets, Vanguard genuinely strives for the lowest cost structure, and passes those savings on to their investors.
The idea that you can now buy an index of small-cap, domestic, growth companies for 11 basis points a year is just amazing. 11 basis points! That means if you had $10,000 invested, the annual overhead cost would be just $11. And that’s for a fairly focused index – I believe the broad based US domestic stock index ETF from Vanguard is down to just 7 basis points!
When at all possible, I tend to go with the Vanguard index ETF/Fund. In fact, since many brokerages (like Fidelity) charge exorbitant commissions on the Vanguard funds, you can now just buy the ETFs like any other stock. Pay a cheap commission once, and pay cheap expenses for decades.
Hard to beat a great product with a great cost from a great firm. Hard to beat.
I read a really interesting book on my trip to Boston last week. It’s called Greenspan’s Bubbles: The Age of Ignorance at the Federal Reserve, by William Fleckenstein. I’ve read Bill Fleckenstein’s columns on-and-off since 1999, when I found him through Herb Greenberg. He’s definitely an intelligent guy, and while he presents like a perma-bear, the reality is that he’s really just a very strong, traditional, bottom-up, fundamentals-based valuation guy.
He has a real axe to grind in this book, but I’m going to do a book review in a separate post. However, one of the topics he raised was so interesting to me, I had to write a post about it.
Summary: I think we seriously messed up our monetary policy in the 1990s.
To be most specific, I think that in the 1990s, we made a fundamental change to the way we track inflation statistics for the United States that on the surface seems logical. But unfortunately, the realities about the economics of computers are so extreme, they may have completely distorted the inflation numbers for the entire country. And if you distort the inflation numbers for the entire country, you run the risk of distorting the monetary policy of this country. In fact, if you seriously mess up inflation calculations, you also affect fiscal policy, social benefit policy, and even global economic stability.
Yeah, it could be that big.
OK, here’s the information from the book that got me thinking. It starts on Page 39, in the chapter called, “The Bubble King”. Fleckenstein explains three changes that were made to the way the US calculates consumer price inflation (CPI) in 1995:
Change 1: Move from Arithmetic to Geometric Rates. OK, this one is perfectly legitimate. After all, inflation rates compound year to year, so calculating the rate as a geometric progression is fundamentally correct. I was actually shocked to find out we didn’t do this before, frankly. True, at low percentages, arithmetic and geometric calculations don’t always vary a lot, but they do vary, and geometric is absolutely the right way to calculate the number.
For those of you asking what the difference is, let’s use this example. Say over 5 years, the price of milk goes up 50%. Arithmetic calculation would say 50%/5 = 10% per year. The problem, of course, is that if you actually raise the price by 10% per year, you get a lot more than 50% because the price increases compound each year. In Year 1, you’d go from $1.00 to $1.10, and in Year 2, you’d go to $1.21. By Year 5, you’d be at $1.61, not $1.50. It’s just like compounding interest in your savings account. Geometric calculations take this into account. Instead of 10%, they would say the inflation rate was 8.45%, which over 5 years compounds to 50%.
Doing this lowers the number reported, but it’s fundamentally the correct number to report on an annual basis. So far, so good.
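The milk example can be checked in a few lines:

```python
total_increase = 0.50  # milk up 50% over the period
years = 5

arithmetic_rate = total_increase / years                  # 10% per year
geometric_rate = (1 + total_increase) ** (1 / years) - 1  # ~8.45% per year

# Compounding the arithmetic rate overshoots; the geometric rate lands exactly.
print((1 + arithmetic_rate) ** years)  # ~1.61, not 1.50
print((1 + geometric_rate) ** years)   # 1.50
```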
Change 2: Asset Substitution. This one is a little murkier. Basically, the way that economists calculate inflation for consumer goods is that they take a representative sample of products – hundreds of them. They then track the prices for these products each year. If you’ve ever seen those funny articles that track the “price index of the 12 days of Christmas” every year, you get the idea. 🙂
Asset substitution covers the case where similar goods might be substituted by people if one rises in price more than the other. Inflation is lower for the person, because instead of buying the high priced item, they buy the lower priced item. For example, let’s say the basket of goods included a 12-ounce can of soda. If the price of soda skyrocketed for some reason, most people would not actually spend the money, but would drink less soda and more water. The extent to which that substitution happens means that the inflation rate is actually lower for people, because they don’t feel the full impact of the rise in price of soda.
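A toy illustration of the soda example, with made-up prices and quantities (this is not the BLS’s actual methodology, which uses geometric means across categories – just a sketch of why measured inflation drops when consumers substitute):

```python
# Base-period basket: (price, quantity) for each item.
base = {"soda": (1.00, 10), "water": (0.50, 10)}
new_prices = {"soda": 1.50, "water": 0.55}  # soda spikes, water barely moves

base_cost = sum(p * q for p, q in base.values())
fixed_basket_cost = sum(new_prices[k] * q for k, (p, q) in base.items())

# With substitution, the consumer shifts: say, 5 sodas and 15 waters instead.
substituted_cost = new_prices["soda"] * 5 + new_prices["water"] * 15

print(fixed_basket_cost / base_cost - 1)  # ~0.37: inflation with a fixed basket
print(substituted_cost / base_cost - 1)   # ~0.05: lower measured inflation
```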
Fleckenstein argues that this change was “truly absurd.” Like a lot of the analysis in the book, that’s a significant exaggeration. The truth is, the fundamental case for substitution is sound. But like any of these economic techniques, if abused, it could introduce very large errors into the calculation of inflation.
Change 3: Hedonic Adjustments. OK, this is the one that has me worried. The CBO describes these as “quality adjustments”. Once again, the logic behind them is sound. It’s the execution that’s troubling. Hedonic adjustments account for the fact that if you improve the quality and features of one of the items in the basket of goods, the price might rise due to that increase in feature set, not inflation. For example, if in 2001 a Honda Civic has 145 horsepower, and in 2004 a Honda Civic has 160 horsepower, then the 2004 Honda Civic actually has 10% more horsepower than the 2001 version. To the extent that people pay for horsepower, the inflation numbers are adjusted to reflect that part of the price increase in the Honda Civic is due to increase in function, not just inflation.
Like asset substitution, this could easily be abused, since it involves a judgment call: how much has the product improved, versus how much has the price simply risen due to inflation? It’s a hard line to draw, especially since in 2004 there are no new 145-horsepower Honda Civics around for an apples-to-apples comparison.
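To make the judgment call concrete, here’s a toy Python sketch of a hedonic adjustment for the Civic example. The sticker prices are hypothetical, and the assumption that buyers value horsepower linearly is mine — real hedonic methods use regressions over many features at once:

```python
# Toy hedonic decomposition for the Honda Civic example.
# Assumption (hypothetical): buyers value horsepower linearly, so the
# quality share of the price change is proportional to the hp increase.
price_2001, hp_2001 = 16_000, 145   # hypothetical sticker price, actual hp
price_2004, hp_2004 = 17_500, 160

observed_change = price_2004 / price_2001 - 1   # total sticker-price change
quality_change = hp_2004 / hp_2001 - 1          # ~10.3% more horsepower

# Strip out the quality improvement; what's left is "pure" inflation
pure_inflation = (1 + observed_change) / (1 + quality_change) - 1

print(f"observed price change: {observed_change:.1%}")  # ~9.4%
print(f"quality adjustment:    {quality_change:.1%}")   # ~10.3%
print(f"adjusted inflation:    {pure_inflation:.1%}")   # slightly negative
```

Notice what happens with these made-up numbers: the sticker price rose 9.4%, but after the hedonic adjustment the “real” price fell. That sensitivity to the quality estimate is exactly why the judgment call matters so much.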
So, now that you’ve gotten your fill of Macroeconomics for the day, here’s the part where we may have wrecked our monetary policy.
Well, it’s not just Moore’s Law. It’s the pace of product improvement in the high tech industry, specifically hardware. It’s huge. It’s unbelievable. There has never been a manufactured good like the computer, which doubles in capability every 18 months. Hard drives double in size. I bought a 40MB external hard drive in 1993 for $200. I just bought a 1TB drive for the same price last month. That’s a 25,000-fold increase in storage for the same price in 15 years.
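The hard-drive arithmetic, annualized — a quick Python sketch (using decimal megabytes; the exact dates and the binary/decimal distinction shift the numbers only slightly):

```python
# The $200 hard drive, 1993 vs. 2008: same price, 25,000x the capacity.
mb_1993 = 40            # 40MB for $200 in 1993
mb_2008 = 1_000_000     # 1TB (decimal) for $200 in 2008
years = 15

capacity_ratio = mb_2008 / mb_1993          # 25,000x
price_per_mb_1993 = 200 / mb_1993           # $5.00 per MB
price_per_mb_2008 = 200 / mb_2008           # $0.0002 per MB

# Annualized rate of decline in $/MB -- the "deflation" a hedonic
# adjustment would feed into the index for this good
annual_decline = 1 - (price_per_mb_2008 / price_per_mb_1993) ** (1 / years)

print(f"capacity ratio: {capacity_ratio:,.0f}x")
print(f"annual price decline per MB: {annual_decline:.1%}")  # ~49% per year
```

Roughly 49% deflation per year, sustained for fifteen years. No other good in the basket behaves anything like that.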
Try feeding that through “Hedonic Adjustment” and see what you get. A huge deflationary element.
Now, that wouldn’t matter, except for one thing: computers have become a decently large chunk of the US economy. Not huge, mind you. The US economy is now over $13 Trillion, and computers are lucky to make up 2-3% of that. But 2-3% is actually a big number when you start feeding ridiculous improvements in “quality/features per dollar” through it.
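A back-of-the-envelope sketch of that effect, with hypothetical round numbers of my own choosing — just a weighted average, but it shows how hard a small deflating sector pulls on the headline figure:

```python
# How a small, sharply deflating sector drags down the aggregate index.
# All figures below are hypothetical round numbers for illustration.
computer_weight = 0.025        # computers as ~2.5% of the economy
computer_inflation = -0.45     # hedonic "quality per dollar" deflation
rest_inflation = 0.035         # everything else running at 3.5%

aggregate = (computer_weight * computer_inflation
             + (1 - computer_weight) * rest_inflation)

print(f"aggregate inflation: {aggregate:.2%}")  # ~2.29%
# A 2.5% slice deflating at 45%/yr shaves more than a full point
# off a 3.5% headline number.
```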
Let me jump to page 101 of the book, in the chapter called “The Stock Bubble Bursts”:
James Grant, editor of the always insightful Grant’s Interest Rate Observer, was one skeptic who took the trouble to dissect the complicated subject that Greenspan seemed to accept at face value. In the spring of 2000, Grant published a study by Robert J. Gordon, a Northwestern University economics professor, who had prepared for the Congressional Budget Office a paper with a shocking revelation:
There has been no productivity growth acceleration in the 99% of the economy located outside the sector which manufactures computer hardware… Indeed, far from exhibiting a productivity acceleration, the productivity slowdown in manufacturing has gotten worse: when computers are stripped out of the durable manufacturing sector, there has been a further productivity slowdown in durable manufacturing in 1995-99 as compared to 1972-95, and no acceleration at all in nondurable manufacturing.
Grant backed that thunderbolt up with another study conducted by two economists, James Medoff and Andrew Harless. Their contention was that the use of a hedonic price index grossly misrepresented the actual data.
This is bad news. Bad bad news.
In case you are wondering, the fundamental question that our Federal Reserve and other governmental agencies concerned with the US economy ask themselves is how much of the growth in the economy is due to three factors:
Population growth
Productivity growth
Inflation
If our calculation of inflation is off, it drastically changes our calculation for productivity. Productivity is the measure of how much economic value is generated from one unit of labor time. The 1990s were largely heralded as a decade of re-invigorated productivity growth. It’s why some people think Robert Rubin (or Bill Clinton) was great. It’s why people believed in a new economy driven by technological progress.
The data above is disturbing. Yes, it confirms that high tech might have had a phenomenal impact on our aggregate numbers. But it’s totally misleading if it turns out that 99% of the economy was not, in fact, seeing productivity growth. Worse, it’s possible computers were actually masking continued weakness in every other area.
Look, I’m fairly sure that the people responsible for collecting this data are intelligent, and that this issue has likely been raised already. It’s also possible that this book and its citations are already known and discredited.
Still, I’m left with the following thoughts:
Is the above data true? If so, does this mean the 1990s were not, in fact, a real productivity boom for the economy overall?
If these issues are true and known, are the Federal Reserve, Treasury, Congress, et al. taking this into account when they make monetary and fiscal policy decisions? If inflation is understated, then interest rate cuts, fiscal stimulus, and a whole host of other policy decisions could be disastrous. We could end up with HUGE inflation in everything except computers to make the numbers balance. (It reminds me of that line from “The Matrix Reloaded” – the system is desperately trying to balance the equation.)
When they make hedonic adjustments for computers, do they take actual utility into account? Sure, today’s Windows PC is 3x faster than one from five years ago, but the latest versions of Windows & Office are much more resource-intensive than they were five years ago too. My Mac Plus booted faster than my PowerMac G5. How do they measure the hedonic adjustment for computers? Are they grotesquely overestimating the increased value from hardware improvements, without discounting the resource requirements of the software needed to provide equivalent “utility”?
Feel free to comment if you have pointers to information either confirming or refuting the above issues. This hits home for me as an issue that ties together two of my strongest personal interests – computers & economics.
Also, feel free to post this blog URL to other boards or forums where experts might be able to answer some of the above questions.
This is a 2.4MB PowerPoint presentation that walks through the basics of the Subprime crisis. It’s extremely funny, if you are into stick figures that use foul language. It definitely wins the award for best use of a Norwegian stick figure swearing in a PowerPoint document. (I will consider others for the award, if you post links.)
Yes, please don’t download this if you are offended by any of the seven words banned by the FCC on radio. And yes, if you watch Deadwood on HBO, you will be more than OK with this deck.
Found this post on Lifehacker today. It’s actually just a pointer to this calculator on Consumerism Commentator, which lets you enter your 1040 numbers from 2007 (if you’ve done them yet) and figure out how much you are (or aren’t) getting.
Personally, I’m exceptionally disappointed with the “bi-partisan” stimulus plan that was negotiated by the White House & Congress. There is a time and a place for fiscal stimulus, and a time and a place for social programs. But mixing the two rarely leads to good policy.
The Wall Street Journal had an article today that estimated that roughly 50-70% of the rebate money would end up in consumption. Previous blog posts have argued that the 2001 stimulus rebate, which was similar, was mostly ineffective.
For those of you who have actually clicked through the link about why I named this blog Psychohistory, you know that I’m fascinated by the ways in which the irrational (people) interact with the rational (math, technology, finance). In fact, to quote that original post:
As a software engineer, my primary interest was in human-computer interaction and the recognition that technology is useless without significant thought given to how people perceive and interact with it. As my interests shifted to the study of economics, I developed a deep fascination with the study of behavioral finance and the recognition that classic economic models fail to predict activity in many cases because people are often not rational actors.
These insights are fascinating to me because I firmly believe that in fact, there is a method to the madness. People are irrational in many situations, but in many cases predictably so.
So I named my blog after the fictional science, invented by Isaac Asimov, called Psychohistory, which claimed to predict the behavior of society by aggregating the behavior of unpredictable individuals.
Dan Ariely seems to have taken a more direct approach. He’s named his blog Predictably Irrational, and is launching his first book this month with the same name. And I have to say, I’m thinking that I should have used that name instead. 🙂
Here is a brief bio of Dan Ariely, in his own words:
Predictably Irrational, is my attempt to take research findings in behavioral economics and describe them in non academic terms so that more people will learn about this type of research, discover the excitement of this field, and possibly use some of the insights to enrich their own lives. In terms of official positions, I am the Alfred P. Sloan Professor of Behavioral Economics at MIT’s Sloan School of Management and at the Media Laboratory, a founding member of the Center for Advanced Hindsight, and a visiting professor at Duke University.
I discovered his blog through a reference to his recent piece on the Societe Generale scandal, where a mid-level trader making less than €70,000 a year ran up the largest trading loss on record – $7.2 billion. Some of his insights:
Before we decide which parties are to blame, let me tell you about some experiments we recently conducted on cheating with MIT and Harvard students.
We gave a large group of students a sheet of paper with 20 simple math problems but only five minutes to solve these problems. A third of the students submitted their sheets and got paid 50 cents per correct answer. Another third were asked to tear up their worksheets, stuff the scraps into their pockets, and simply tell the experimenter their score in exchange for payment–making it possible for them to cheat. The final third were also told to tear up their worksheets and simply tell the experimenter how many questions they had answered correctly. But this time, the experimenter wouldn’t be giving them cash. Rather, she would give them a token for each question they claimed to have solved. The students would then walk 12 feet across the room to another experimenter, who would exchange each token for 50 cents.
What is the point of all of this? We had the intuition that people could easily take a pencil from work home without thinking of themselves as dishonest, but that they could not take 10¢ from a petty-cash box and feel good about themselves. In essence we wanted to find out if the insertion of a token into the transaction–a piece of valueless, nonmonetary currency–would affect the students’ honesty? Would the token make the students less honest in tallying their answers?
What were the results? The participants in the first group (who had no way to cheat) solved an average of 3.5 questions correctly (they were our control group). The participants in the second group, who tore up their worksheets, claimed to have correctly solved an average of 6.2 questions. Since we can assume that these students did not become smarter merely by tearing up their worksheets, we can attribute the 2.7 additional questions they claimed to have solved to cheating. But in terms of brazen dishonesty, the participants in the third group took the cake. They were no smarter than the previous two groups, but they claimed to have solved an average of 9.4 problems–5.9 more than the control group and 3.2 more than the group that merely ripped up the worksheets. This means that when given a chance to cheat under ordinary circumstances, the students cheated, on average, by 2.7 questions. But when they were given the same chance to cheat with nonmonetary currency, their cheating increased to 5.9–more than doubling in magnitude. What a difference there is in cheating for money versus cheating for something that is a step away from cash!
I find the implications of this fascinating, especially when extended to current thinking around executive compensation, the balance of incentives and disincentives in commerce and regulation, and even general management theory. How much of the historical “agency problem” exhibited by the misalignment of interests between management and investors might be exaggerated by this effect?
Fundamentally, there is something extremely powerful here. If it is true that humans don’t fit the classical model of rational actors, there may still be hope for creating extremely productive and efficient systems in technology and finance. If people are irrational, but in predictable patterns, then by investing time and thought into how those patterns affect behavior, we can optimize our products and services around those behaviors.
You can bet I’ll be ordering his book as soon as it is available. If you’d like, click through here to buy it on Amazon.com. I do, after all, get a marginal affiliate bonus if you order it through this site.
Ironically, I’m visiting MIT next week to give a speech on behalf of LinkedIn. Maybe I’ll be lucky and have a chance to meet Prof. Ariely while I’m there.
Only problem is… despite being a print subscriber, the WSJ still prevents me from accessing their content online. Bleh. Thank goodness for Rupert Murdoch, right? 🙂 In any case, I am still scanner-equipped, so I can share the better points with you.
Check out this graph. Let it sink in.
Maybe I’m making too big a deal about this, but I found this chart incredibly fascinating. What it basically says is that if the dollar had stayed even with the Euro since 2000, then we’d have $57 Oil, not $100 Oil. So an increase, yes, but not nearly as shocking. More importantly, if the dollar was “as good as gold”, the price of oil would have barely risen at all, maybe to $30.
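A quick sanity check of the chart’s claim, using approximate exchange rates from memory (roughly $0.92 per euro in early 2000 and $1.56 per euro in early 2008 – both my assumptions, not figures from the article):

```python
# Back-of-the-envelope: what would $100 oil cost if the dollar had held
# its 2000 value against the euro? Exchange rates are approximate.
usd_per_eur_2000 = 0.92
usd_per_eur_2008 = 1.56
oil_usd_2008 = 100.0

# Price of oil in euros today, converted back at the 2000 exchange rate
oil_eur_2008 = oil_usd_2008 / usd_per_eur_2008
oil_usd_constant = oil_eur_2008 * usd_per_eur_2000

print(f"oil in constant-2000 dollars (vs. euro): ${oil_usd_constant:.0f}")  # ~$59
```

That lands within a couple of dollars of the chart’s $57 figure, which is close enough given the rough exchange rates.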
It makes you realize how much the topics of the day (peak oil, dependency on foreign supplies, etc) are controlled by economic perspective. I’m not saying anything about the quality of those issues, or the validity of those topics. I’m just pointing out the obvious – the sensationalist nature of seeing a high dollar value on oil is likely fueling the interest in those topics.
However, as I read this piece, it made me wonder: what does $100 oil really mean? Does it mean that oil is dearer, or that the dollar is cheaper? Or both?
The reason I titled this post with the preface, “Statistics Matter”, is because I realized today that of all the disciplines and fields I have had the occasion to study and practice over the past 15 years, the fundamental concepts that underlie the mathematics of statistics seem to always be valuable, if not essential. (In fact, Against the Gods is one of the books I recommend to people regularly.) I’m probably going to blog on a couple other topics this weekend that all highlight the importance of understanding statistics.
The insight here, which is so common it’s almost trite, is the distinction between correlation and causality. Correlation measures how often, when one thing happens, a second thing also happens – the relationship between their occurrences. Causality is literally the measure of whether one thing happening causes the second to happen. The confusion arises because people assume that correlation implies causality, when in many cases it doesn’t.
In my Introduction to Statistics class, 15 years ago, they gave this example. Many people with yellow teeth also develop lung cancer. They are highly correlated. But getting your teeth whitened will not prevent lung cancer. Why? Well because there is a third thing, smoking, which actually causes both yellow teeth and lung cancer. Yellow teeth are positively correlated with lung cancer, but they don’t cause it. Seems obvious, but check out in your daily news how often you’ll see reports of studies that demonstrate nothing but correlation. Health fads are almost all started this way.
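The yellow-teeth example is easy to simulate – a small Python sketch with made-up probabilities, just to show the correlation appearing with no causal link:

```python
# Tiny simulation of confounding: smoking causes both yellow teeth and
# lung cancer, so the two are correlated with no causal link between them.
import random

random.seed(42)

population = []
for _ in range(10_000):
    smoker = random.random() < 0.3
    # Smoking independently raises the probability of both outcomes
    yellow_teeth = random.random() < (0.7 if smoker else 0.1)
    cancer = random.random() < (0.15 if smoker else 0.01)
    population.append((yellow_teeth, cancer))

# Compare P(cancer | yellow teeth) against P(cancer | white teeth)
cancer_given_yellow = (sum(c for y, c in population if y)
                       / max(1, sum(1 for y, c in population if y)))
cancer_given_white = (sum(c for y, c in population if not y)
                      / max(1, sum(1 for y, c in population if not y)))

print(f"P(cancer | yellow teeth) = {cancer_given_yellow:.3f}")  # much higher
print(f"P(cancer | white teeth)  = {cancer_given_white:.3f}")
# Whitening teeth changes neither probability: the correlation flows
# entirely through the hidden common cause (smoking).
```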
Back to Oil.
This article made me wonder – is the weak dollar the reason for $100 oil, as this article suggests, or is $100 oil the cause of the weak dollar? Alternatively, is there a third cause, not mentioned, which is actually weakening the dollar and making oil more valuable?
The great thing about economics, of course, is that almost everything is inter-related. As a result, I’ve always found it very difficult to use macro-economic theory to identify causal factors, except in retrospect. (Hence the joke about economists predicting 19 of the last 7 recessions…)
I accept that one explanation, based on the data in the article, could be that oil hasn’t really become more expensive, in absolute terms. It’s the dollar that has weakened, and that makes it seem like oil is expensive to Americans.
Alternatively, it also seems plausible that, since oil is an external good predominantly sourced from outside the US, and since our oil-producing partners have historically shifted from dollar-denominated pricing to a more balanced basket of currencies, the increasing demand for oil has shifted marginal demand away from the dollar and towards previously underweighted stores of value like the Euro and gold.
My bet here is that neither of the above really explains the whole situation. It seems likely that there are a large number of factors affecting the value of the dollar and the value of oil, and the end result has generated a falling dollar and rising value for commodities, including gold & oil.
This issue of causality really matters, however, because if it is in fact a weak dollar which is the causal factor, we have very limited policy options. Let me leave you with the summary thoughts from the article:
This piece of the puzzle really worries me quite a bit – if indeed the rising prices we see are a monetary phenomenon, then we are really stuck between a rock and a hard place with the mortgage/credit issues and the weak dollar. What we could actually be seeing is a magnification effect that has spanned across multiple business cycles, each time the liquidity “solutions” getting larger and larger. This time, the liquidity needed may be so large that it’s actually finally breaking the dollar. Not surprising, really, since it’s pretty easy to argue that the size of the US home mortgage market is actually big enough to really matter versus the aggregate net value and annual product of the United States.
It could be that the future has already been written in this regard – the price we’ll pay over the next 5-10 years from the housing bubble will be measured in a weaker dollar. And that will inflate everything, including our most dear commodities, like oil. We may have to face the fact that liquidity may solve market failures that surround frozen credit markets, but there will be a price to pay.
Not that it would be so terrible, given that Paul Krugman is clearly a fairly brilliant economist. But over the past few years, as he has become more and more of a shrill political voice, and less and less of a measured economic voice, I’ve found myself disagreeing with him more often than not.
First, hat’s off to Google for sharing their visiting scholar program online. I found this video, from December 14th, on Youtube this week. It’s a great talk, from Paul Krugman, about the causal elements behind the current housing liquidity crunch. (It’s over 1 hour, including Q&A. And yes, I listened to the whole thing.)
But that’s about housing, not trade. However, hearing Krugman speak live (vs. his normal Op-Ed tirades), reminded me of how intelligent and thoughtful he can be.
Interestingly, in the same week, I caught this piece from his NY Times blog (Dec 28, 2007). It’s about the current references to his original work on global trade, which is where I remember first seeing references to Krugman’s work. He references this blog post, which discusses some of Krugman’s original positions on trade in some detail.
Of course, Krugman wrote a full Op-Ed on trade in the December 28 edition of the New York Times. It’s available online here. Strangely, it’s an extremely rational piece, and it makes me wonder if his politics are moderating a bit as we get closer to the 2008 election.
Some paragraphs worth sharing:
…recently we crossed an important watershed: we now import more manufactured goods from the third world than from other advanced economies. That is, a majority of our industrial trade is now with countries that are much poorer than we are and that pay their workers much lower wages.
For the world economy as a whole — and especially for poorer nations — growing trade between high-wage and low-wage countries is a very good thing. Above all, it offers backward economies their best hope of moving up the income ladder.
But for American workers the story is much less positive. In fact, it’s hard to avoid the conclusion that growing U.S. trade with third world countries reduces the real wages of many and perhaps most workers in this country. And that reality makes the politics of trade very difficult.
This is perhaps one of the most fairly balanced assessments I’ve seen on free-trade recently. The macro-economics behind the benefits of free-trade between nations is overwhelmingly positive, in terms of the aggregate economic gains. But I’ve learned to be very suspicious of arguments that persist over long periods of time, between well-educated people, on topics that theoretically should be very simple. If they really were that simple, you’d expect that over time, most well-educated people would resolve the discussion and move on.
The oscillation between free trade and protectionism doesn’t surprise me historically at a political level – it’s pretty easy to understand why the steel worker, seeing his wages drop and/or his local plant disappear, would push politically towards protectionism. But there has to be something more to this argument. The macro-economics of the situation are clear: cheaper foreign steel means less money for domestic steel makers, but cheaper steel for everyone else in the country. That may not be much consolation for the steel worker, but it is the answer to why free trade, in the aggregate, tends to benefit the country more than it hurts it.
(No, I’m not going to touch the recent China poisoned toys issue. Yes, it’s obvious that we need some amount of regulation to prevent poisoned toothpaste and lead-painted toys, etc.)
More from Krugman’s article:
All this is textbook international economics: contrary to what people sometimes assert, economic theory says that free trade normally makes a country richer, but it doesn’t say that it’s normally good for everyone. Still, when the effects of third-world exports on U.S. wages first became an issue in the 1990s, a number of economists — myself included — looked at the data and concluded that any negative effects on U.S. wages were modest.
The trouble now is that these effects may no longer be as modest as they were, because imports of manufactured goods from the third world have grown dramatically — from just 2.5 percent of G.D.P. in 1990 to 6 percent in 2006.
And the biggest growth in imports has come from countries with very low wages. The original “newly industrializing economies” exporting manufactured goods — South Korea, Taiwan, Hong Kong and Singapore — paid wages that were about 25 percent of U.S. levels in 1990. Since then, however, the sources of our imports have shifted to Mexico, where wages are only 11 percent of the U.S. level, and China, where they’re only about 3 percent or 4 percent.
This is interesting. Theoretically, it has always been roughly assumed that high-wage countries compensate for their wages, somewhat, with high productivity: more value created per worker, usually due to heavy investment in education, capital, infrastructure, and low-risk environments. But it’s possible that, while mostly true, that logic reaches its limit at some point. If labor in some countries is priced at 3-11% of US costs, and our trade shifts meaningfully in that direction, then that becomes competitive downward pressure on wages.
Once again, the economics are fairly clear that in the aggregate, those lower wages should mean cheaper goods for everyone. But if a large percentage of our population faces this pressure all at once, it could lead to some extremely negative adjustment periods for not just those people, but for the entire economy. This, in fact, is a potential explanation for some of the income disparity we’ve been seeing this decade as trade has shifted to China & Mexico.
One flaw I can see here already, potentially, is that an ever-declining percentage of our workforce is in manufacturing. The last number I recall seeing was as low as 19%. (Please comment if I’m mistaken here.) It’s tough to get to “most workers in this country” from there.
Now here is the part that scares me a bit – Paul Krugman’s conclusion:
So am I arguing for protectionism? No. Those who think that globalization is always and everywhere a bad thing are wrong. On the contrary, keeping world markets relatively open is crucial to the hopes of billions of people.
But I am arguing for an end to the finger-wagging, the accusation either of not understanding economics or of kowtowing to special interests that tends to be the editorial response to politicians who express skepticism about the benefits of free-trade agreements.
It’s often claimed that limits on trade benefit only a small number of Americans, while hurting the vast majority. That’s still true of things like the import quota on sugar. But when it comes to manufactured goods, it’s at least arguable that the reverse is true. The highly educated workers who clearly benefit from growing trade with third-world economies are a minority, greatly outnumbered by those who probably lose.
As I said, I’m not a protectionist. For the sake of the world as a whole, I hope that we respond to the trouble with trade not by shutting trade down, but by doing things like strengthening the social safety net. But those who are worried about trade have a point, and deserve some respect.
He’s right, the critics have a point.
Too often, opponents of free trade are kowtowing to special interests or misunderstanding economics. Despite that, however, it’s clear that there are some significant macro-economic impacts from free trade that cannot be brushed away, particularly around wage pressure and the percentage of the population affected.
I haven’t had time to formulate my own theories on how to weave through this complexity, but chances are there is some analysis that could better quantify the wage pressure from a given trade relationship. That would give some guidance about when to slow the pace of opening markets, phasing in the pressures rather than having them adjust catastrophically over relatively short periods.