Some folks sensed a "negative tone" on my part in our last Treasury Cafe post, "Answer These Questions For A Better Cash Flow Forecast", believing that I advocated an approach that would not require a lot of time, effort and attention.
In a sense, this perception has its merits - I am somewhat cautious about the forecasting process for a number of reasons, but that is by no means the whole story. However, if that is what appears closest to the surface, let's start from there and work our way forward.
Why is it that our cash flow forecasts will always be wrong?
The main objective of the cash flow forecasting process is to provide us a glimpse into the future...and therein lies the biggest problem.
Nobody can predict the future!...for a number of reasons.
Mother Nature delivers her fair share of unexpected windfalls and disappointments to a business. Had we been forecasting in January 2011, the cash flow generation of our Japanese business operations for the year would have been wildly off due to the earthquake and tsunami that occurred two months later. Conversely, the cash flow from sales forecast for our Chicago snowblower division would have been understated 4 years in a row (assuming we used average snowfall) from 2006-2010.
Social factors are another potentially significant contributor to randomness. Imagine being a member of the hapless cash management staff at Abercrombie & Fitch at the beginning of this year, watching the fallout from our CEO's remarks wreak havoc on our estimates of cash inflows from sales! Or, suppose we were Paula Deen's cash manager forecasting licensing and advertising revenue about 3 months ago. Would the remainder of this year be anywhere close to our forecast?
Economic and Market Conditions also contribute to uncertainty. Interest rate forecasts made in the Summer of 2008 would have been off by a factor of two or three come that Fall, due to the onset of the "Great Recession". And less than a year before that, many investors found themselves stuck in Auction Rate Securities because the auctions were failing and they could not 'cash out' of their investments as planned. The market had never seen failures of that magnitude in its history.
Daniel Kahneman, the Nobel Prize-winning scientist credited with a significant role in the development of Behavioral Economics, reports on numerous studies of human behavior which show that we are quite likely to either overestimate or underestimate the likelihood of low-probability events (original paper here).
In addition, our human forecasting process gives weight, often at a sub-conscious, outside-of-awareness level of thinking, to some events while entirely excluding others (the "Availability Heuristic"). A lot of our thought processes operate on a "what you see is all there is" basis: if we can quickly call something to mind we focus on it, and if we cannot, we ignore it. Thus, we individually and collectively possess a strong bias that makes it extremely difficult for us to be 'comprehensive'.
Statistical models, including those frequently encountered in forecasting such as regression or time-series analysis, will, if well-constructed, have errors that approximately follow a normal distribution. If this is the case, then we can expect about 5% of our estimates to fall more than two standard deviations from the actual values.
In other words, on average 1 day out of 20 our estimation is going to be significantly over or under, even if we have a great statistical process.
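To see this "1 day in 20" figure emerge, here is a minimal simulation sketch. It assumes nothing about any particular forecast; it simply draws normally distributed errors and counts how often they land beyond two standard deviations:

```python
import random

# Illustrative sketch: simulate normally distributed daily forecast errors
# and count how often they fall beyond two standard deviations.
random.seed(42)
sigma = 1.0          # standard deviation of the forecast error (arbitrary units)
n_days = 100_000     # simulated forecasting days

errors = [random.gauss(0, sigma) for _ in range(n_days)]
outliers = sum(1 for e in errors if abs(e) > 2 * sigma)

share = outliers / n_days
print(f"Share of days beyond two standard deviations: {share:.1%}")
# Theory says about 4.6%, i.e. roughly 1 day in 20.
```

The exact theoretical figure is about 4.6%, which rounds to the "about 5%" and "1 day out of 20" used in the text.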
If it is possible for some people to predict the future, it is quite unlikely they are toiling away day by day in a Corporate Finance group. They are much more likely to be sipping their Pina Coladas on a beach at your favorite tropical island resort after making their fortunes at the race track or in the financial markets.
In our last post, "Answer These Questions For A Better Cash Flow Forecast", we noted that cash forecasting involves a cost / benefit tradeoff. If we want a more precise forecast we are going to have to pay for it, in terms of money, time and attention.
To see how this works, let's consider a simple example. Let's say that customer payments is a line item in our 30-day forecast, and let's further suppose that these estimates come from our sales area, who are the folks in closest contact with the customer.
Over the past year, which is approximately 2,000 working hours per person (250 working days x 8 hours per day), let's say that our 2-person sales team generated $10 Million in revenue. This amounts to a revenue generation rate of about $2,500 per salesperson-hour.
For the sake of improved accuracy, let's further suppose that we implement a new requirement on the sales staff to provide us with collections information that has been validated with their customer's personnel.
Joe, one of our salespeople, knows that Company A's most recent invoice, for $10,000, is due in a week. He calls his AP contact to confirm the payment date, only to discover they are out of the office. After many calls to others at Company A (from a contact in purchasing, to a manager in purchasing, to the office of the CFO, and back down to the AP manager, who places him on hold for 10 minutes while they find out who has been assigned responsibility for the invoice in question), he finally learns that the invoice has been scheduled to be paid 2 days later than originally anticipated.
By the time the exercise has been completed, Joe has spent 2 hours on this task.
Assuming Joe would have achieved the average revenue generation rate during that time, we have forgone $5,000 in additional revenue in order to be 2 days more precise in our cash forecasting accuracy. For the $10,000, let's say that the knowledge of its timing allows us to invest or avoid additional borrowing at an incremental rate of 1% (note: we're being generous with that number given today's rates!). Our total return for those 2 days is a whopping $0.55 (10000 * 1% * 2 / 360)!
Spending $5,000 to earn $0.55 is not a successful business recipe!
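The arithmetic above can be laid out end to end. All figures come from the example itself; the 360-day year follows money-market convention:

```python
# Illustrative sketch of the cost/benefit arithmetic from the Joe example.

# Cost side: forgone selling time.
annual_revenue = 10_000_000
person_hours = 2 * 250 * 8                      # 2 salespeople x 250 days x 8 hours
revenue_per_hour = annual_revenue / person_hours  # $2,500 per salesperson-hour
hours_spent = 2
opportunity_cost = revenue_per_hour * hours_spent  # $5,000 forgone

# Benefit side: knowing the $10,000 payment arrives 2 days later.
invoice = 10_000
incremental_rate = 0.01   # generous given today's rates
days_gained = 2
benefit = invoice * incremental_rate * days_gained / 360  # about 55 cents

print(f"Cost: ${opportunity_cost:,.0f}  Benefit: ${benefit:.2f}")
```

Whatever the precise rate assumption, the cost exceeds the benefit by roughly four orders of magnitude.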
Suppose I tell you that there is 50% chance of rain tomorrow, and tomorrow it rains. Was I right in my forecast?
What if it did not rain? Was I right in my forecast then?
Unfortunately, there is no way to really know. When Mother Nature "rolled the dice" to determine today's weather and came up "rain", we do not know if those dice reflected a 1% chance of rain, or a 10% chance, or a 50% chance, or a 90% chance, or a 99.99% chance. We only know that it either rained or did not rain. Since we do not know the "probabilities of Mother Nature's dice throw", we cannot calibrate our model against it.
As Taleb points out in The Black Swan: "You see what comes out, not the script that produces events, the generator of history."
What we would like to learn as we develop a track record is "oh, it rained today so I see that it should have been a 60% chance rather than 50%". Unfortunately, we only know that it rained.
The idea of separating the outcome from a forecast's validity is difficult for many to grasp: "hey, if it rains, the forecast that predicted rain was a 'good forecast'." Statistical methods rely on the 'law of large numbers'. If we roll a die and come up with a 3, we need to roll it many more times to learn that a 3 comes up 1/6 of the time, as do 1, 2, 4, 5 and 6. If we 'forecast' a 3 and a 3 is rolled, it is not a 'good call'; it is luck.
Let's take an extreme example to emphasize the point. If your child picks up a 6-shooter loaded with 5 bullets, makes a deal with your neighbor that if they 'win' they get $1 million, puts the gun to their head, pulls the trigger, and survives, would you call that a "good decision"? After all, they are now $1 million richer. Taking foolish gambles is not sound decision-making even when they happen to pay off. This example illustrates that you cannot judge a decision's or prediction's quality by the single outcome that resulted. And because tomorrow is another day, a single data point is all we are ever going to get.
Given this litany of reasons, should we abandon the cash forecasting process?
Of course not!
As we discussed in "Answer These Questions for a Better Cash Flow Forecast", we need some assessment of our future in order to manage our liquidity, financial strategy, metrics, and potential options.
So how to reconcile the fact that we need to forecast even in the face of knowing that it will be wrong?
I can remember a conference session where the speaker emphasized that we should "hold people accountable" for the forecasting process.
In the corporate world, "holding people accountable" is generally a euphemism for "hit your objective....or else", with the "or else" being something along the lines of no bonus, becoming manager of the firm's Siberian operations, getting fired, or some other drastic form of punishment.
The problem with using this "stick" approach, as Daniel Pink discusses in his book Drive (see here for a synopsis by Checkside HR), is that it actually hinders productive, creative, collaborative problem-solving, which are exactly the forces that will make a cash-flow forecast better!
Instead, generate a sense of ownership by tapping people's intrinsic motivation (what Pink calls "Motivation 3.0"). This can be done through regular team interaction focused on three things: 1) objective review of prior forecasts, 2) open discussion of upcoming forecasts, and 3) illustration of the organizational consequences of both.
As an example, we sit down with our forecast stakeholders and discuss the most recent prior forecast. Without allocating blame, and avoiding a scolding tone, we neutrally comment on where variances have occurred, explore the processes that led to the original forecast, and brainstorm potential methods that might realistically be deployed. We further note that because of these variances on one day we had to arrange emergency, 'late in the day' funding (which is much more expensive), thereby costing the company x.
Or, as future forecasts are developed, we can identify some of the organizational actions that will occur based on them - financing plans, timing strategies, etc. As the consequences are understood, areas where attention may not have been focused can become apparent. "Oh, I see that extending the term is x amount more costly, perhaps the timing of this large payment can be accelerated".
Given that there are at least nine reasons why the forecast will always be wrong, approach the process with an open, 'willingness to learn' mindset rather than treating it as an 'assignment of blame' exercise. Given the many forces outside of their control, it is unreasonable to expect forecast perfection from people, and those who appear to demand it will meet resistance and lose respect.
The more information and insight we can gather, the better able we will be to develop the best forecast process possible (even though it will be wrong). The means to open the information spigot is to make conversations and discussions positive experiences, exemplary of respect for each participant's contributions and input.
People will 'clam up' if they sense that a witch-hunt is going on, and will no longer consider themselves stakeholders in the game.
Most forecast numbers, at their root, are generated from a Price-Volume relationship. A cash inflow estimate may relate to revenue (i.e. units sold times price), or accounts receivable collections (number in the 'bucket' times payout percentage), or something similar.
Assessing variance between forecast and actual along the driver lines allows us to develop insights. Is our forecast off because of volume reasons or price reasons?
Future actions can be determined based on this type of analysis. If volume is up, what are the market factors that made it so this month, and are they likely to continue or 'revert to the mean'? If payout percentage has dipped, what additional organizational resources would it take to get that figure back up to where we had originally planned?
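The price-volume decomposition described above can be sketched in a few lines. The figures here are purely illustrative, and the convention of valuing the volume effect at the forecast price (and the price effect at actual volume) is one common choice; other allocations of the joint effect exist:

```python
# Illustrative sketch: split a forecast-vs-actual cash variance into a
# volume component and a price component (all figures are hypothetical).
forecast_units, forecast_price = 1_000, 50.0
actual_units, actual_price = 1_100, 48.0

forecast_cash = forecast_units * forecast_price   # 50,000
actual_cash = actual_units * actual_price         # 52,800

# Volume effect at forecast price; price effect at actual volume.
volume_variance = (actual_units - forecast_units) * forecast_price  # +5,000
price_variance = (actual_price - forecast_price) * actual_units     # -2,200

total_variance = volume_variance + price_variance
print(volume_variance, price_variance, total_variance)

# The two components reconcile exactly to the overall cash variance.
assert abs(total_variance - (actual_cash - forecast_cash)) < 1e-9
```

Here the forecast missed low overall, but for offsetting reasons: volume came in strong while realized price slipped, and each driver prompts a different follow-up question.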
We have established that our cash forecast will always be wrong for at least nine reasons. Practically speaking, we must be willing to consider a number of alternative environments we may be operating in during the future.
Given the critical nature of cash, we cannot use the 'expectations approach' often described in the textbooks. For example, using this approach, if our forecast has a 95% chance of being "off" by as much as 100,000 and a 5% chance of being off by 1,000,000, then expectation theory would tell us to hold 145,000 each day (0.95 x 100,000 + 0.05 x 1,000,000).
Unfortunately, this doesn't help us at all!
For 95% of the time, rather than holding 100,000 in "cushion" we would be holding 145,000, thereby increasing the cost of maintaining adequate liquidity during these times.
However, for the 5% of the time when the error is 1,000,000, the fact that we have 145,000 isn't going to mean anything significant, since we still won't have enough to cover the shortfall; we end up insolvent all the same, even though we had calculated for this very event. We might as well have just held 100,000.
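The failure of the expected-value cushion can be made concrete. Using the figures from the example:

```python
# Illustrative sketch of why the 'expectations approach' cushion fails.
p_normal, error_normal = 0.95, 100_000
p_event, error_event = 0.05, 1_000_000

# Expected-value cushion: 0.95 * 100,000 + 0.05 * 1,000,000
expected_cushion = p_normal * error_normal + p_event * error_event
print(f"Expected-value cushion: {expected_cushion:,.0f}")

# On normal days we over-hold; on event days we are still badly short.
excess_normal_days = expected_cushion - error_normal    # 45,000 idle
shortfall_event_days = error_event - expected_cushion   # 855,000 uncovered
print(f"Idle excess on normal days: {excess_normal_days:,.0f}")
print(f"Uncovered shortfall on event days: {shortfall_event_days:,.0f}")
```

The "average" cushion of 145,000 protects us in neither state of the world: it is 45,000 too large 95% of the time and 855,000 too small the other 5%.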
Instead, we need to have contingency plans in place for a number of different events. Is there an alternate funding source we can develop to help us deal with those 5% days, while on the other days holding only the 100,000 cushion?
Or, can we operate as if the 1,000,000 will always occur while maintaining normal practices? This is sometimes possible.
Rather than cataloging a long list of events, note that the impact is inevitably a capacity question over some time frame, such as "what is our 'late in the day' capacity?" or "what is our capacity on a liquidity-constrained market day?"
For example, assume we are a firm issuing commercial paper (CP) to fund our day-to-day cash needs. In normal markets we can issue 100 million with no problem, while on 'liquidity event' days we can only issue 10 million. Using our cash-flow forecast, we can schedule CP issuance so that our daily rollover requirement is no more than 10 million. By using this type of strategy, we have taken the impact of market shocks (due to whichever of the nine reasons eventually occurs!) 'off the table'.
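The laddering idea above can be sketched as a simple scheduling calculation. The figures come from the example; the even spread is one simple way to meet the constraint, not the only one:

```python
import math

# Illustrative sketch: ladder CP maturities so that no more than the
# stressed-market capacity must be rolled over on any single day.
total_cp = 100_000_000            # total outstanding commercial paper
stressed_daily_capacity = 10_000_000  # issuance capacity on a 'liquidity event' day

# Spread maturities across enough days that each day's rollover need
# stays within what the market will absorb even under stress.
days_needed = math.ceil(total_cp / stressed_daily_capacity)
schedule = {day: total_cp / days_needed for day in range(1, days_needed + 1)}

print(f"Ladder across {days_needed} days, {schedule[1]:,.0f} maturing per day")
assert all(amount <= stressed_daily_capacity for amount in schedule.values())
```

With maturities laddered across 10 days, even a liquidity-event day requires rolling only what the stressed market can absorb.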
Of course, market events are not the only source of randomness, so we may need to add other contingency plans for other types of situations that may occur. The result is that we end up with a "playbook" containing a number of activities and strategies that allow us to "sleep easy" even when the inevitable forecast errors show up, no matter the reason.
The world is unpredictable enough that even the best-laid plans go awry, and so it goes with our cash flow forecast. However, the process is useful even though it is never going to be perfect. In order to maximize this usefulness, we need to encourage "collaborative ownership", establish contingency plans, and go through the evaluation exercises in order to derive actionable insights, identify trends, and remain 'on top' of the situation.
- What other reasons have I overlooked that will cause a cash flow forecast to be wrong?
- What types of forecast contingency plans do you have in place?
- What process steps have you undertaken that make the process more productive?
Add to the discussion with your thoughts, comments, questions and feedback! Please share Treasury Café with others. Thank you!