The advent of the knowledge economy is not making life easier for disciplined investors. In times of uncertainty like these, character will prevail.

August 22, 2018 | François Sicart

The First Manufacturing Panic

Back in the mid-1980s, Japan had a high and growing trade surplus with the United States and a stubbornly strong currency. This combination allowed cash-rich Japanese companies to snap up American trophy assets such as Manhattan’s Rockefeller Center, Firestone Tire & Rubber, Columbia Pictures and, of course, much overpriced real estate.

These purchases helped fashion the pessimistic economic consensus of the time: American manufacturing was dying, overwhelmed by a more efficient Japan whose rise as an economic competitor seemed increasingly unstoppable.

Sometimes, it is easy to be contrarian. As we made routine visits to American manufacturing plants at the time, we observed something quite different. On plant floors everywhere, we noticed personnel being instructed and trained in techniques such as zero-defect, cell manufacturing, just-in-time production systems and computer-integrated manufacturing, in which time, quality, process and the resulting elimination of waste were the new themes.

Eventually, we decided to start a research program chronicling what we saw as a revival of American manufacturing. Among a board of academic advisers, I enrolled the help of Robert S. Kaplan, then Arthur Lowes Dickinson Professor of Accounting at the Harvard Business School, and co-author with H. Thomas Johnson of Relevance Lost: The Rise and Fall of Management Accounting (Harvard Business School Press, 1987).

He wrote for us an editorial entitled: “Reported Earnings – What you see is not always what you get.” In it, he pointed out that traditional short-term (annual) financial measurements provided an inadequate and often misleading picture of a company’s efforts to improve product and process performance.

Antiquated Accounting Tools as the Culprit

Prof. Kaplan explained that American accounting relied on concepts that dated back to the 19th and early 20th centuries. The bulk of product costs in those days consisted of direct (or touch) labor and materials: financial accounting could thus measure short-term, periodic profits with reasonable accuracy. But to survive and prosper in the 1980s, companies needed to invest heavily in software, employee training, radically improved production processes, marketing and distribution channels, new product design, new organizational structures and new systems.

Unfortunately, instead of treating these investments in knowledge, know-how and infrastructure as long-term investments to be depreciated over time like other investments, the traditional financial accounting expensed them fully as they occurred.

As a result, companies making significant investment for their long-term future would show declining short-term earnings, whereas companies that deferred the investments required for long-term success could prop up their reported earnings temporarily.
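This earnings distortion can be shown with a minimal sketch (the figures and function are hypothetical, for illustration only): a company that fully expenses a $100 investment reports a loss today, while an otherwise identical company that depreciates the same outlay over five years reports a profit.

```python
def reported_earnings(operating_profit, investment, capitalize, useful_life=5):
    """Reported earnings under two accounting treatments of the same investment.

    capitalize=True:  the investment is depreciated straight-line over its life.
    capitalize=False: the investment is fully expensed in the current year.
    (Hypothetical figures for illustration only.)
    """
    if capitalize:
        return operating_profit - investment / useful_life
    return operating_profit - investment

# A company earning $50 that invests $100 in training and process improvement:
print(reported_earnings(50, 100, capitalize=False))  # fully expensed: -50, a reported loss
print(reported_earnings(50, 100, capitalize=True))   # depreciated: 30.0, a reported profit
```

The same cash outlay, under two accounting conventions, produces opposite earnings signals — which is precisely why the heavy investor could look weaker than the company deferring its future.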

Unfortunately, even when statistics on operational progress from the new manufacturing techniques mentioned earlier became available, they could not easily be integrated into traditional accounting reports. So, naturally, to try to remain competitive, U.S. companies first worked to reduce the “usual suspects,” labor and materials, which were easily identifiable. But times had changed, and the results on overall costs were disappointing.

Under scrutiny from Prof. Kaplan and others, the reasons became clear. Direct labor costs in most U.S. manufacturing operations amounted to only 10%-15% of total costs — even less in electronics. Overhead (administrative, research, marketing and distribution costs) often accounted for a much larger proportion of total expenses. Unless those were reduced drastically, overall cost-cutting would continue to disappoint.

It must also be remembered that in the mid-1980s, prime lending rates were more than 10% in the United States vs. less than 6% in Japan. The cost of carrying excess inventories thus represented a significant disadvantage compared to Japanese competitors. Unfortunately, manufacturers knew how to measure inventories of raw materials and of finished products, but not work-in-process inventories – a significant cost that accumulates on the plant floor while products are made or assembled. Most manufacturers’ efforts to reduce inventories had so far been concentrated on the relatively smaller portion of total inventories, and no leap forward would be achieved until work-in-process inventories were also significantly whittled down.

Outside of old-fashioned heavy industry, the cost of direct labor and materials inventories may have been less than a third of total costs for many manufacturing enterprises. But management accounting, which is supposed to analyze the cost of each product or department, had routinely allocated overhead cost (the largest component of total cost) in proportion to the business’s use of the much smaller direct labor and materials, which accountants knew how to measure.

Reporting Profits on the Way to the Grave

For a single-product company, traditional cost accounting might not have been lethal. But in reality, at any one time, most companies should be viewed as collections of projects, some of which are in the investment stage and some in the harvest stage.

If a company invests heavily in a division in the harvest stage, which seems profitable based on old accounting concepts, and neglects another division that appears less profitable because it is investing heavily in future growth and treating all that investment as a current expense, that company has a good chance of going out of business over time.

I concluded a February 1988 paper (Ben Graham Revisited: The New Challenge of Value Investing) as follows:

“For an investment analyst, the discrepancy between operating and financial performance… penalizes the short-term earnings of many companies in the process of creating long-term value… For the time being, legwork remains more useful to value investors than the sedentary study of annual reports. Today’s value investor has no choice but to leave his desk and become a traveling investigator.”

In the 30 years since the mid-1980s, operating measurements and cost accounting have been largely modernized in line with the recommendations of Prof. Kaplan and his colleagues. But we have entered an era when new reporting tools again need to be developed if we are to estimate the value of companies in the “knowledge economy.”

New Economy, New Valuation Challenge

In an August 14, 2018 book review for his blog, Bill Gates writes the following:

“Software doesn’t work like [traditional manufacturing]. Microsoft might spend a lot of money to develop the first unit of a new program, but every unit after that is virtually free to produce. Unlike the goods that powered our economy in the past, software is an intangible asset. And software isn’t the only example: data, insurance, e-books, even movies work in similar ways.

…The brilliant new book Capitalism Without Capital by Jonathan Haskel and Stian Westlake is about as good an explanation as I’ve seen. They… outline four reasons why intangible investment behaves differently:

It’s a sunk cost. If your investment doesn’t pan out, you don’t have physical assets like machinery that you can sell off to recoup some of your money.

It tends to create spillovers that can be taken advantage of by rival companies. Uber’s biggest strength is its network of drivers, but it’s not uncommon to meet an Uber driver who also picks up rides for Lyft.

It’s more scalable than a physical asset. After the initial expense of the first unit, products can be replicated ad infinitum for next to nothing.

It’s more likely to have valuable synergies with other intangible assets [in design, licensing, etc.]”

Increasingly, companies’ value originates in intellectual property — invention, concepts and knowledge — which is reflected in intangibles like software, data and design. But in this asset-light, knowledge-heavy new economy, technological change is very rapid and the risk of disruption is therefore higher than in industries with a more predictable long-term path. It is riskier to extrapolate the future earnings of innovative companies than those of more mature companies. It could even be argued, as a leading investor did some years ago, that companies in industries with a fast pace of innovation should command lower, not higher, price/earnings ratios than those operating in more sedate environments.

As in the 1980s manufacturing revolution, traditional accounting today tends to recognize expected losses on intangible assets (the research invested in new products, for example) but to ignore expected gains on intellectual property, because their size and timing are unknown.

Companies like Microsoft, Google, Facebook and other social networking media have demonstrated more value-creating potential than traditional businesses. Even Amazon’s mastery of logistics has arguably proven more instrumental to its success than traditional retailing skills. Yet all these New Economy leaders are hard to value because intangible assets, even those with intuitively large potential, are difficult to estimate. In the promising area of biotechnology, for example, amounts spent on research and development do not automatically translate into financial returns and, in fact, they may never do so.

Today’s valuation problem, therefore, “rhymes” with that of the mid-1980s but is in fact more serious because the proportion of assets that are intangible and immeasurable is even larger.

Guidance is not Always Governance

Initially, new-economy companies devised their own methods for treating intangibles, to help financial analysts better understand their businesses and their progress.

“But there’s no way to compare the metrics from company to company — or, in many cases, even from year to year within the same company. Alternative measures, once used fairly sparingly and shared mostly with a small group of professional investors, have become more ubiquitous and increasingly disconnected from reality… The proliferation of alternative metrics not only poses a problem for investors, but it can also harm the companies themselves by obscuring their financial health, overstating their growth prospects beyond what standard GAAP (Generally Accepted Accounting Principles) measures would support, and rewarding executives beyond what can be justified.” (“The Pitfalls of Non-GAAP Metrics,” MIT Sloan Management Review, Winter 2018)

Even more problematic are custom metrics that present an alternative view of earnings by leaving out one or more expenses required by GAAP. Some companies have been known to adjust their reported earnings to exclude mainstream expenses such as pension costs, regulatory fines, “rebranding” expenses, fees paid to directors, executive bonuses, and severance payments. By removing such expenses, some companies have been able to transform a negative earnings report into positive “pro forma” earnings.
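The mechanics of this transformation are simple arithmetic, as a hedged sketch shows (the function name and the figures are hypothetical, not taken from any actual filing): adding excluded expenses back to a GAAP loss can yield a positive “pro forma” number.

```python
def pro_forma_earnings(gaap_earnings, excluded_expenses):
    """Add back the expenses that management asks investors to ignore.

    Hypothetical adjustment for illustration; real non-GAAP
    reconciliations vary widely by company.
    """
    return gaap_earnings + sum(excluded_expenses.values())

# A GAAP loss of $30 becomes a pro forma profit once "one-time" items are added back:
excluded = {"stock_based_comp": 40, "restructuring": 25, "regulatory_fine": 15}
print(pro_forma_earnings(-30, excluded))  # 50
```

The exercise makes plain why such metrics flatter: every dollar of excluded expense flows straight into the headline number.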

In the New York Times of June 18, 2015, (“Tech Companies Fly High on Fantasy Accounting”) Gretchen Morgenson remarks:

“Many of the popular companies with premium-priced shares promote financial results and measures that exclude their actual costs of doing business. Among the biggest costs these companies ask investors to ignore are those associated with stock-based compensation, acquisitions and restructuring. But these are genuine expenses, so excluding them from financial reporting makes these companies’ performance look better than it actually is.

“Corporations still must report their financial results under Generally Accepted Accounting Principles, or GAAP. But they often play down those figures, advising investors to focus instead on the numbers favored by those in the executive suite — who, it just so happens, stand to gain personally from the finagling.”

She goes on to explain that such expenses often become glaringly evident when these companies have to undertake expensive stock-repurchase programs to limit the diluting effect of their generous stock grants.

In short, traditional valuation tools are ill-suited to companies in the “knowledge economy,” and while new tools are being invented and tested, it is currently hard to calculate a price/earnings or price/book value ratio for a given stock, since we don’t know what the true earnings or book value are.

Contrarianism Alone is not Enough

The real “price” of a stock is not its quote in the newspaper, which alone is meaningless, but some valuation measure, usually expressed as a ratio of that quote to some fundamental income or balance-sheet measure: in an auction market, this reflects the supply of and demand for that stock among the mass of investors.

In theory, contrarian investing should result in buying low (when stocks are not in favor) and selling high (when they are), as does value investing. However, both approaches have lost some of their precision in recent years.

For example, there are natural limits to how high or how low the price/earnings ratio can go. The price/earnings ratio should therefore remain within a channel, even if a broad one. Thus, investors may not be able to buy at the lowest price or to sell at the exact top but, by following the value discipline, they can improve their odds of success. Unfortunately, to calculate any value ratio, investors need to know the true value of a company’s earnings and assets, and that is difficult in the absence of appropriate accounting tools.

As for the contrarian approach, it tends to function best at extremes of market euphoria and depression. In the absence of guideposts from value (which valuation is supposed to provide), crowd psychology is all-powerful and can sometimes give way to extremes of “irrational exuberance.” This loss of reliability has been further aggravated by computer-controlled high-speed trading and trading algorithms, which further remove investment decisions from human judgment and common sense.

As value/contrarian investors, we at Sicart are facing a conundrum. On the one hand, the tools of value investing have become less reliable; on the other, contrarian investing without the reference point of valuation is a less precise timing tool than in the past.

Value investors are not the only ones affected by these issues, but they are trained to work on measurable (largely historical) data. In contrast, growth investors are more willing to make bets on the future and are more creative in their expectations. Both styles of investing, though, are affected by the complexities of evaluating companies in the new knowledge economy.

History and Experience to the Rescue

As usual, navigating the vagaries of the investment markets will require a combination of discipline and flexibility. Too much discipline may lead to rigidity and resistance to change; too much flexibility may lead to short-term trading, which has not produced lasting investment success in the past. As Warren Buffett once pointed out, using the right proportion of each quality will not be a matter of IQ but one of character.

Fortunately, this conundrum is not new. Ben Graham, responsible for one of the strictest disciplines for stock selection, wrote the following in his 1949 landmark book, The Intelligent Investor:

“The underlying principles of sound investment should not alter from decade to decade, but the application of these principles must be adapted to significant changes in the financial mechanisms and climate.”


François Sicart

August 20, 2018