
Burgess COMMENTARY
It was said of British industry in the post-war years that it had become globally uncompetitive because accountants had become CEOs during the 1930s, when many companies were in financial difficulties. The accountants, the argument went, were able to help sort out financial difficulties, but had no idea what investments needed to be made to build and run an efficient factory.

Something might now be said along the same lines about economists. They have had far too influential a role in global policy formulation and management for a long time, and in the process the economies of North America and Europe have been 'gutted', while the record in addressing the needs of the 'bottom billions' has been rather poor.

I learned about measurement as an engineer, not as an economist or accountant. One of the features of a good measure is that it stays the same whatever the circumstances: the meter as a unit of length is always the same. On the other hand, a dollar as a unit of measure for anything is about as good as a rubber band would be for measuring length!
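To make the rubber-band point concrete, here is a minimal sketch of what it takes to compare dollar figures across years: the nominal number is meaningless until it is deflated by a price index. The index values below are illustrative placeholders, not official statistics.

```python
# Illustrative sketch: a dollar stretches over time, so nominal amounts
# must be deflated by a price index before they can be compared.
# The CPI values below are illustrative placeholders, not official data.
cpi = {1970: 38.8, 2020: 258.8}

def to_real(nominal: float, year: int, base_year: int = 2020) -> float:
    """Convert a nominal dollar amount into base-year dollars."""
    return nominal * cpi[base_year] / cpi[year]

# $10,000 of 1970 income is roughly $66,700 in 2020 dollars
# under these placeholder index values.
print(round(to_real(10_000, 1970)))
```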

All the modelling being done by the masterful econometricians seems to me to rest on a pretty weak foundation. I think of the economy as sitting on top of scientific and technical capabilities. These have changed in amazing ways over the past 50 years, which would require the models to change just as fundamentally. My impression is that the economic models fail dismally in handling this characteristic of the economy.

I rarely see good analysis of what changes in productivity do to the economy, either from academic economists or from the analysts of the business and financial community. Productivity ought to be a good thing for society, but in practice it is the foundation for fewer workers, less payroll, and less buying power in society, while enabling higher profits and more wealth for those who control capital.

The academics in elite institutions ... ivory towers ... should have done better. The damage powerful economists have allowed to be done in society and the economy over the past 50 years is huge, and meaningful change is yet to come.

Peter Burgess - TrueValueMetrics Multi Dimension Impact Accounting


What We’ve Learned from the Financial Crisis by Justin Fox (Harvard Business Review, November 2013)

Macroeconomics Discovers Finance

For decades, the basic idea that governed economic thinking was that markets work: The right price will always find a buyer and a seller, and millions of buyers and sellers are far better than a few government officials at determining the right price. But then came the Great Recession, when the global financial system seemed on the verge of collapse—as did prevailing notions about how the economic and financial world is supposed to function.

The author has followed academic economics and finance as a journalist since the mid-1990s. To him, three shifts in thinking stand out: (1) Macroeconomists are realizing that it was a mistake to pay so little attention to finance. (2) Financial economists are beginning to wrestle with some of the broader consequences of what they’ve learned over the years about market misbehavior. (3) Economists’ extremely influential grip on a key component of the economic world—the corporation—may be loosening.

In the early 1930s, he concludes, policy errors by governments and central banks turned a financial crisis into a global economic disaster. In 2008 the financial shock was at least as big, but the reaction was smarter and the economic fallout less severe.

Five years ago the global financial system seemed on the verge of collapse. So did prevailing notions about how the economic and financial worlds are supposed to function.

The basic idea that had governed economic thinking for decades was that markets work. The right price will always find a buyer and a seller, and millions of buyers and sellers are far better than a few government officials at determining the right price. In the summer of 2007, though, the markets for some mortgage securities stopped functioning. Buyers and sellers simply couldn’t agree on price, and this impasse soon spread to other debt markets. Banks began to doubt one another’s solvency. Trust evaporated, and not until governments jumped in, late in 2008, to guarantee that major banks would not fail did the financial markets settle down and begin fitfully to function again.

That intervention seems to have prevented a second Great Depression—although the inhabitants of a few unfortunate countries such as Greece and Spain might beg to differ. But the economic downturn was definitely worse than any other since the Great Depression, and the world economy is still struggling to recover.

And what has been the impact on economic thinking? Seven years after the crash of 1929, John Maynard Keynes published the most influential work to come out of that era of turmoil—The General Theory of Employment, Interest and Money—yet not for at least another decade was it clear how influential that book would be. Five years after the crash of 2008 is still early to be trying to determine its intellectual consequences. Still, one can see signs of change. I’ve been following academic economics and finance as a journalist since the mid-1990s, and I’ve researched academic debates going back much further than that. To me, three shifts in thinking stand out: (1) Macroeconomists are realizing that it was a mistake to pay so little attention to finance. (2) Financial economists are beginning to wrestle with some of the broader consequences of what they’ve learned over the years about market misbehavior. (3) Economists’ extremely influential grip on a key component of the economic world—the corporation—may be loosening.

These trends are within and on the fringes of elite academia; I won’t attempt to delve into politics or public opinion in this article. That’s partly because doing so would make it impossibly broad, but also because—for the past half century at least—economic ideas born at the University of Chicago, MIT, Harvard, and the like really have tended to trickle down and change the world.


A wildly oversimplified history of macroeconomics might go like this: Before the 1930s the discipline didn’t really exist. There was simply economics—the study of how rational, self-interested people interact to set prices and drive economic activity. This offered useful insights for the long run, but it wasn’t much help in a crisis. “Economists set themselves too easy, too useless a task,” Keynes complained of his peers in 1923, “if in tempestuous seasons they can only tell us that when the storm is long past the ocean is flat again.” The economics that he and others set about constructing for tempestuous times was dubbed macroeconomics.

One important aspect was monetary policy. The U.S. economist Irving Fisher argued that price instability (inflation and deflation) was the cause of most economic turbulence and could be averted by astute central bankers. Keynes agreed with this but thought it was not enough. One of his main observations was that although an individual is perfectly rational in wanting to hunker down and hoard his money during tough times, everyone’s hunkering down at the same time only makes things worse. Government needs to step in and avert such downward spirals by temporarily spending much more than it takes in.

Before long, younger “Keynesians” were building models that depicted the economy as a sort of hydraulic system: Pump money in here, generate jobs there. For policy makers, this had the virtue of being straightforward advice. But problems arose. Milton Friedman, of the University of Chicago—an adherent of Irving Fisher’s monetarist views—argued that the economic fine-tuning envisioned by the Keynesians was impossible to get right in practice. His opinion gained ground in policy circles during the inflationary 1970s, after Keynesian methods seemed to stop working. Within academia, though, it was Friedman’s former student Robert Lucas and the “rational expectations” critique that had the biggest impact. Lucas and his intellectual allies argued that if one assumed that people were rational, forward-looking actors who adjusted their behavior when economic circumstances changed (and economists generally did assume that), then the Keynesian models simply couldn’t be right. People were too smart and markets too dynamic for stimulus spending or other government interventions to have the desired effect.


In its most extreme form, this “new classical” macroeconomics taught that any government attempts to stabilize the economy were pointless. Before long “new Keynesians” were factoring in frictions, such as the tendency of prices and wages to be “sticky” and resist adjustment to changed economic conditions. But their “dynamic stochastic general equilibrium” models were also populated by rational individuals making forward-looking decisions; they forecast relatively gentle fluctuations and generally pointed toward only modestly activist policies. A consensus formed that a combination of steady, rule-based monetary policy and a few automatic fiscal stabilizers—such as increased unemployment insurance payments as people lose their jobs and lower tax receipts as incomes fall—was all it took to tame the business cycle. As Lucas put it in his 2003 presidential address to the American Economic Association, the “problem of depression-prevention has been solved.”

This claim was odd in that the proximate cause of the Great Depression—a breakdown of the financial system in the United States and elsewhere—hadn’t really been part of the discussion. The most compelling chapter in Keynes’s General Theory is the one about financial markets, which outlines the uncertainty and error inherent in “anticipating what average opinion expects the average opinion to be,” but his followers and his critics alike focused on the rest of the book. The leading macroeconomic theories simply didn’t pay attention to the financial sector.

Then the global financial crisis struck, with subsequent steep drops in GDP in the United States and Europe. Mainstream macroeconomic theorists came under heavy fire for having spent decades on work of almost no relevance to the current predicament. Happily, those theorists weren’t the only economists around. As Ricardo Caballero, of MIT, put it in a 2010 article, scholars on the “periphery” of macroeconomics were already “chasing many of the issues that played a central role during the current crisis, including liquidity evaporation, collateral shortages, bubbles, crises, panics, fire sales, risk-shifting, contagion, and the like.” This periphery wasn’t even all that peripheral: Ben Bernanke, at the center of the crisis-fighting campaign as chairman of the Federal Reserve, had long studied how bank failures spread economic havoc. The financial system bailout that transpired in the final months of 2008 was a combination of ideas from this periphery and improvisational crisis fighting. It’s remarkable how widespread among academic economists—even macroeconomists of the Lucas school—was the view that on the whole, it was the right thing to do.

Once the moment of panic had passed, however, unanimity quickly unraveled. In early 2009 there were essentially two working theories about what to do next. One, harking back to Keynes, was that these tempestuous times called for bold measures. With the private sector speedily retrenching, big government stimulus spending was in order, as were unconventional asset purchases and other interventions by central banks. The other theory was that with a financial meltdown averted, things were more or less back to normal. Inflation would soon again be a threat that demanded vigilance from central bankers. Big government deficits would lead to crises of investor confidence. Unemployment insurance and other aid programs would do more macroeconomic harm (by discouraging work) than good. The old rules would still apply.

The events of the past five years have delivered a pretty dramatic refutation of the second theory. Inflation has continued to recede despite the aggressive printing of money by central banks, especially the Fed. Countries that intentionally ran big fiscal deficits, such as the United States and China, weathered the crisis better than those that chose austerity, such as the UK and the Netherlands, or had it forced upon them, as did the nations on the edges of the euro zone. The clear consensus of postcrisis empirical studies is that the fiscal stimulus had a positive economic effect.

But even many stimulus backers still see it as an extraordinary measure for extraordinary times. The United States would be in much better shape to react to future financial crises, they argue, if its debts were smaller and its long-term obligations to Social Security and Medicare more manageable. The question is when to switch from crisis fighting to sound money and fiscal restraint—and it’s remarkable how crude the answers are. A good indication of the limited state of our knowledge is the controversy around the 2010 finding of the Harvard economists Carmen Reinhart and Kenneth Rogoff that economic growth experiences a sharp drop-off when a country’s debt passes 90% of GDP. (In the United States the figure currently stands either just above 100% or just below 75%, depending on whether you count debt held by the Social Security Trust Fund.) After adding in a few more years of data from three countries (Australia, Canada, and New Zealand), weighting the data differently, and fixing an Excel error, another group of economists in 2013 found no evidence whatsoever of a clear drop-off at 90% or anywhere else.
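For readers curious about the mechanics of that dispute, here is a minimal sketch of the kind of reanalysis involved, using invented numbers. Whether every country-year counts equally, or each country's average counts equally, can visibly move the estimated growth rate above the 90% line.

```python
# Minimal sketch of the weighting question in the Reinhart-Rogoff debate.
# All figures are invented for illustration, not actual country data.
import pandas as pd

panel = pd.DataFrame({
    "country":  ["A", "A", "B", "B", "C", "C"],
    "debt_gdp": [95, 110, 92, 30, 105, 40],       # debt as % of GDP
    "growth":   [2.1, 1.8, 3.0, 3.5, -0.5, 2.8],  # real GDP growth, %
})
high_debt = panel[panel["debt_gdp"] > 90]

# Scheme 1: pool all country-years equally.
pooled = high_debt["growth"].mean()
# Scheme 2: average within each country first, then across countries
# (closer to the original Reinhart-Rogoff weighting).
per_country = high_debt.groupby("country")["growth"].mean().mean()

print(f"pooled: {pooled:.2f}%  country-weighted: {per_country:.2f}%")
```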

In macroeconomics, simply mining the data tends not to deliver conclusive answers—mainly because there just aren’t enough data. All we know for sure is what happened—not what would have happened had other policies been followed. That’s why theory is so important. But it’s also probably futile to hope that macroeconomic theory can ever be an entirely reliable guide.

Right now, for example, efforts are under way to build macroeconomic models that do include the financial sector. Princeton University, a hotbed of activity in this field, just held its third annual summer “camp” at which top U.S. and European graduate students in economics are brought up to speed on the intersections between macroeconomics and finance. But look at the state-of-the-art work in the field, such as “A Macroeconomic Model with a Financial Sector,” a new paper by Markus Brunnermeier and Yuliy Sannikov, who organized Princeton’s summer camp, and you realize that although it adds important insights (one is that long periods of low volatility put the economy at higher risk of a big shock), it does so by leaving out lots of other potentially important macroeconomic factors. Only by grossly oversimplifying reality have economists been able to come up with theories that have some predictive power. In macroeconomics, it’s always a question of which oversimplification is most appropriate for a given situation or era. The events of the past few years make clear that ignoring finance was a mistake. Including it, though, will bring its own dead ends and blind spots.

This essential imperfectability of macroeconomics has long led to calls for a more heterodox, open-minded approach. And there is a feeling of relative openness and possibility in the air right now. Central banks—the Bank of England in particular—seem eager to bring in ideas from outside the old macroeconomic core. The Institute for New Economic Thinking, launched by the hedge fund billionaire George Soros in 2009, is funding and disseminating unorthodox research into financial market behavior, the macroeconomic impact of income inequality, and other topics. But a more eclectic macroeconomics might just mean more answers, not clearer ones.

Finance Gets Back to the Big Picture

In academic finance, of course, nobody ever ignored the financial sector. But after a remarkable series of breakthroughs a half century ago, the field settled into a routine: Scholars kept working away at specific puzzles and anomalies but seldom considered their implications for markets or the economy as a whole.

Before the late 1950s, research on finance at business schools was practical, anecdotal, and not all that influential. Then a few economists began trying to impose order on the field, and in the early 1960s computers arrived on college campuses, enabling an explosion of quantitative, systematic research. The efficient market hypothesis (EMH) was finance’s equivalent of rational expectations; it grew out of the commonsense observation that if you figured out how to reliably beat the market, eventually enough people would imitate you so as to change the market’s behavior and render your predictions invalid. This soon evolved into a conviction that financial market prices were in some fundamental sense correct. Coupled with the capital asset pricing model, which linked the riskiness of investments to their return, the EMH became a unified and quite powerful theory of how financial markets work.
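The capital asset pricing model mentioned here boils down to a single line: an asset's expected return is the risk-free rate plus its beta times the market risk premium. A minimal sketch, with illustrative rates rather than market data:

```python
# CAPM in one line: E[r] = r_f + beta * (E[r_m] - r_f).
# The risk-free and market rates below are illustrative assumptions.
def capm_expected_return(beta: float, risk_free: float = 0.03,
                         market_return: float = 0.08) -> float:
    """Expected return on an asset under the capital asset pricing model."""
    return risk_free + beta * (market_return - risk_free)

# A stock that amplifies market moves by 20% (beta = 1.2):
print(f"{capm_expected_return(1.2):.1%}")  # 9.0%
```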

From these origins sprang useful if imperfect tools, ranging from cost-of-capital formulas for businesses to the options-pricing models that came to dominate financial risk management. Finance scholars also helped spread the idea (initially unpopular but widely accepted by the 1990s) that more power for financial markets had to be good for the economy.

By the late 1970s, though, scholars began collecting evidence that didn’t fit this framework. Financial markets were far more volatile than economic events seemed to justify. The link between “beta”—the risk measure at the heart of the capital asset pricing model—and stock returns proved tenuous. Some reliable patterns in market behavior (the value stock effect and the momentum effect) did not disappear even after finance journals published paper after paper about them. After the stock market crash of 1987, serious questions were raised about both the information content of prices and the stability of the risk measures used in finance. Researchers studying individual investing behavior found systematic violations of the premise that humans make decisions in a rational, forward-looking way. Those studying professional investors found that incentives cause them to court tail risks (that is, to follow strategies that are likely to generate positive returns most years but occasionally blow up) and to herd with other professionals (because their performance is judged against the same benchmarks). Those looking at banks found that even well-run institutions could be wiped out by panics.
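Beta itself is nothing exotic: it is the slope of a regression of an asset's returns on the market's returns, which is part of why its instability was such an uncomfortable finding. A minimal sketch with made-up return series:

```python
# Estimating beta as the slope of a regression of stock returns
# on market returns. The return series are made up for illustration.
import numpy as np

market = np.array([0.010, -0.020, 0.015, 0.030, -0.010])
stock  = np.array([0.012, -0.030, 0.020, 0.040, -0.015])

beta, alpha = np.polyfit(market, stock, 1)  # slope is beta
print(f"estimated beta: {beta:.2f}")
```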

But all this ferment failed to produce a coherent new story about how financial markets work and how they affect the economy. In 2005 Raghuram Rajan came close, in a now-famous presentation at the Federal Reserve Bank of Kansas City’s annual Jackson Hole conference. Rajan, a longtime University of Chicago finance professor who was then serving a stint as director of research at the International Monetary Fund (he is now the head of India’s central bank), brought together several of the strands above in a warning that the world’s vastly expanded financial markets, though they brought many benefits, might be bringing huge risks as well.

Since the crisis, research has exploded along the lines Rajan tentatively explored. The dynamics of liquidity crises and “fire sales” of financial assets have been examined in depth, as have the links between such financial phenomena and economic trouble. In contrast to the situation in macroeconomics, where it’s mostly younger scholars pushing ahead, some of the most interesting work being published in finance journals is by well-established professors out to connect the dots they didn’t connect before the crisis. The most impressive example is probably Gary Gorton, of Yale, who used to have a sideline building risk models for AIG Financial Products, one of the institutions at the heart of the financial crisis, and has since 2009 written two acclaimed books and two dozen academic papers exploring financial crises. But he’s far from alone.

What is all this research teaching us? Mainly that financial markets are prone to instability. This instability is inherent in assessing an uncertain future, and isn’t necessarily a bad thing in itself. But when paired with lots of debt, it can lead to grave economic pain. That realization has generated many calls to reduce the amount of debt in the financial system. If financial institutions funded themselves with more equity and less debt, instead of the 30-to-1 debt-to-equity ratio that prevailed on Wall Street before the crisis and still does at some European banks, they would be far less sensitive to declines in asset values. For a variety of reasons, bank executives don’t like issuing stock; when faced with higher capital requirements, they tend to reduce debt, not increase equity. Therefore, to make banks safer without shrinking financial activity overall, regulators must force them to sell more shares. Anat Admati, of Stanford, and Martin Hellwig, of the Max Planck Institute for Research on Collective Goods, have made this case most publicly, with their book The Bankers’ New Clothes, but their views are widely shared among those who study finance. (Not unanimously, though: The Brunnermeier-Sannikov paper mentioned above concludes that leverage restrictions “may do more harm than good.”)
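The arithmetic behind that sensitivity is simple and worth seeing once. With assets equal to debt plus equity, the equity cushion at 30-to-1 leverage is only 1/31 of assets, so an asset decline of a little over 3% wipes it out. A sketch:

```python
# How far asset values can fall before the equity cushion is gone.
# assets = debt + equity, so equity / assets = 1 / (1 + debt/equity).
def wipeout_threshold(debt_to_equity: float) -> float:
    """Fractional fall in asset values that erases all equity."""
    return 1.0 / (1.0 + debt_to_equity)

print(f"{wipeout_threshold(30):.1%}")  # ~3.2% at 30-to-1 leverage
print(f"{wipeout_threshold(5):.1%}")   # ~16.7% at a far more modest 5-to-1
```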


Forcing banks to hold more equity is an example of what’s been called macroprudential regulation. Before the crisis, both Bernanke and his immediate predecessor, Alan Greenspan, argued that although financial bubbles can wreak economic havoc, reliably identifying them ahead of time is impossible—so the Fed shouldn’t try to prick them with monetary policy. The new reasoning, most closely identified with Jeremy Stein, a Harvard economist who joined the Federal Reserve Board last year, is that even without perfect foresight the Fed and other banking agencies can use their regulatory powers to restrain bubbles and mitigate their consequences. Other macroprudential policies include requiring banks to issue debt that automatically converts to equity in times of crisis; adjusting capital requirements to the credit cycle (demanding more capital when times are good and less when they’re tough); and subjecting highly leveraged nonbanks to the sort of scrutiny that banks receive. Also, when viewed through a macroprudential lens, past regulatory pressure on banks to reduce their exposure to local, idiosyncratic risks turns out to have increased systemic risk by causing banks all over the country and even the world to stock up on the same securities and enter into similar derivatives contracts.
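One item on that list, adjusting capital requirements to the credit cycle, can be expressed as a simple rule. The thresholds and ratios below are invented for illustration and do not correspond to any regulator's actual schedule.

```python
# Hypothetical countercyclical capital rule: require a thicker equity
# cushion when credit is booming, a thinner one when it is contracting.
# All thresholds and ratios are invented for illustration.
def required_capital_ratio(credit_growth: float) -> float:
    base = 0.08                  # baseline capital / risk-weighted assets
    if credit_growth > 0.10:     # boom: credit expanding faster than 10%/yr
        return base + 0.025
    if credit_growth < 0.0:      # bust: credit contracting
        return base - 0.01
    return base

for growth in (0.15, 0.05, -0.03):
    print(f"credit growth {growth:+.0%} -> required ratio "
          f"{required_capital_ratio(growth):.1%}")
```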

A few finance scholars, most persistently Thomas Philippon, of New York University, have also been looking into whether there’s a point at which the financial sector is simply too big and too rich—when it stops fueling economic growth and starts weighing on it. Others are beginning to consider whether some limits on financial innovation might not actually leave markets healthier. New kinds of securities sometimes “owe their very existence to neglected risks,” Nicola Gennaioli, of Universitat Pompeu Fabra; Andrei Shleifer, of Harvard; and Robert Vishny, of the University of Chicago, concluded in one 2012 paper. Such “false substitutes...lead to financial instability and could reduce welfare, even without the effects of excessive leverage.”

I shouldn’t overstate the intellectual shift here. Most day-to-day work in academic finance continues to involve solving small puzzles and documenting small anomalies. And some finance scholars would put far more emphasis than I do on the role that government has played in unbalancing the financial sector with guarantees and bailouts through the years. But it is nonetheless striking how widely accepted in the field is the idea that financial markets have a tendency to become unhinged, and that this tendency has economic consequences. One simple indicator: The word “bubble” appeared in 33 articles in the flagship Journal of Finance from its founding, in 1946, through the end of 1987. It has made 36 appearances in the journal just since November 2012.

Economists Start Losing Control of the Corporation

Economists have had a great run. Over the past half century they have become influential presidential advisers, taken charge of central banks, and wielded great power at major international financial organizations (the IMF and the World Bank). They are paid much more than most academics (finance professors do even better). Their methods and their ideas have infiltrated other fields, such as law and political science, and to a large extent everyday discourse as well.

Edward Lazear, of Stanford, describing this ascent in a 2000 article titled “Economic Imperialism,” attributed it to his field’s rigor. “Economics is scientific; it follows the scientific method of stating a formal refutable theory, testing the theory, and revising the theory based on the evidence,” he wrote. That’s surely part of it. But most economic theories also build upon a common foundation of self-interested individuals or companies seeking to maximize something or other (utility, profit); thus economics has an advantage over disciplines with a less unified approach. And economists cottoned on early to the dawn of the quantitative age, giving them a head start in interpreting and shaping it. Finally, economists—at least some economists—just seem to have ridden the tide of history for the past half century.

Still, one narrow way of looking at the world can’t be the only valid path toward understanding its workings. There’s also a risk that emphasizing individual self-interest above all else may even discourage some of the behaviors and attitudes that make markets work in the first place—because markets need norms and limits to function smoothly. These concerns are relevant in many fields, but in recent years they have probably been placed in starkest relief in the study of corporate governance.

The current popular conception of the corporation is of an organization that exists to maximize returns to shareholders. This is very much the work of economists. Milton Friedman made the case rhetorically with his 1970 argument in the New York Times magazine that the social responsibility of business is to increase its profits. His former students Michael Jensen and William Meckling elaborated in a widely cited 1976 academic article that described the great challenge of corporate governance as getting the “agents” (managers) to act in the interest of the “principals” (shareholders). Friedman, Jensen, and Meckling were out to counteract what they saw as a disturbing tendency among CEOs to view themselves as responsible not just to shareholders but to customers, communities, and other stakeholders—an attitude that has continued to hold sway in Japan and parts of Europe. Such diffuse accountability, the thinking went, could bring confusion, be an excuse for complacency, or enable self-dealing. As leading U.S. firms began to confront overseas competition in a big way in the 1970s, this wasn’t an idle concern.



Justin Fox is the executive editor, New York, at Harvard Business Review and the author of The Myth of the Rational Market (HarperBusiness, 2009).