Sunday, March 25, 2012

Apple’s Dividend – Good Financial Strategy?

Apple announced this week that it will initiate a $2.65 per share quarterly dividend along with a $10 billion share buyback program, for a total distribution to shareholders of approximately $40 billion over the next three years.
This decision reportedly goes against a long-standing Steve Jobs philosophy of not paying dividends and instead holding on to cash in order to fund acquisitions and other investment opportunities.
Was this the correct approach for Apple to take?

Does This Decision Even Matter?
Whether an organization pays dividends, executes a share buyback, or holds on to cash for future investment is irrelevant under a perfect capital market set of assumptions. These assumptions are:
·         Perfect Market Information by all Participants
·         Absence of Taxes
·         Absence of Financial Distress Costs
·         Rational Investors and Management
·         Absence of Transaction Costs
We all know that these assumptions do not hold true in real life. But, as we discussed in CAPM - The Theory of Theory, they are a convenient way to simplify the items under consideration in order to focus the explanation.

How is it that Dividend Decisions do not Matter?
In order to illustrate why the decision is irrelevant, we will assume that we have a 10% opportunity cost of capital and that we invest in a firm that earns cash at a rate of ²10 per year (the symbol ² stands for Treasury Café Monetary Units, or TCMU’s), and that this will continue forever. We also assume that the firm’s cash flow comes from investments that require a 10% return for the amount of risk they entail (i.e. the cost of capital is the same as ours).

Figure A - Dividend Discount Model
We will use the equation shown in Figure A (called the Dividend Discount Model or the Gordon Growth Model, depending on whom you ask) to analyze the firm’s decision to do one of two things – a) pay ²10 to investors as a dividend tomorrow, or b) retain ²10 in the company and undertake a new investment that will forever earn 10%, the firm’s cost of capital.
Pay ²10 to Investors – in this case D = ²10, r = 10%, and g = 0%, and therefore by the equation in Figure A we know our shares are worth ²100. In addition, we will have ²10 in cash from our dividend payment, for a total of ²110.
Retain ²10 in the firm and make a new investment – at first glance the Figure A equation does not help us, because D = ²0; the firm is keeping the money to invest. But we do know that the firm is earning ²10 right now, and with an additional ²10 of investment it will earn ²11 next year. If we assume that this will be paid as a dividend next year, we use D = ²11 in our equation and our shares are worth ²110.
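For readers who want to check the arithmetic, here is a minimal sketch in Python, assuming Figure A is the standard Gordon Growth form (price = next dividend ÷ (r − g)); the function name and the use of plain numbers for TCMU amounts are just for illustration:

```python
def gordon_price(dividend, r, g=0.0):
    """Present value of a perpetual dividend stream growing at rate g."""
    return dividend / (r - g)

r = 0.10  # our 10% opportunity cost of capital

# a) pay the TCMU10 out as a dividend: shares worth 100, plus 10 in cash
pay_out = gordon_price(10, r) + 10

# b) retain TCMU10 and reinvest at 10%: next year's dividend becomes 11
retain = gordon_price(11, r)

print(pay_out, retain)  # 110.0 and 110.0, so the investor is indifferent
```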
In both scenarios we arrive at a value of ²110, and therefore as an investor we are indifferent between the two. One hundred percent dividend or one hundred percent retention? It is all the same to us! Both are equally valuable.
The same conclusion, under the perfect market assumptions discussed above, would hold true for the share buyback as well (but you will need to take my word for it for now until we cover this in another post).

Let’s Talk Reality
As mentioned, the perfect capital market assumptions do not work in reality. Why? Let’s take each assumption in turn with a small example:
·         Perfect Market Information by all Participants – do you know more than the management about what is going on in the company?
·         Absence of Taxes – this will occur when there is absence of death as well!
·         Absence of Financial Distress Costs – nobody was hurt by the 2008 financial crisis, were they?
·         Rational Investors and Management – people are irrational, at least sometimes, and some more than others, always.
·         Absence of Transaction Costs – have you been able to buy a stock, bond, or commodity for free, and/or at the same rate at which it would be bought back from you?

Why Have the Theory?
So why do we go through the shenanigans in the last section if none (let alone all) of these assumptions actually applies?
The answer is that establishing that investors are indifferent among dividends, firm retention, and share repurchases under perfect capital market assumptions creates a baseline that we can use as we progress to the next level. With this baseline in place, we can make forays away from it without “losing our way”, much as we might unwind a string as we begin exploring a cave. If we need to pursue another line of thought, we are able to find our way back in order to investigate a different path.

First Foray – Imperfect Information
Our first foray will be to look at what happens when market participants do not all have the same amount or type of information, which of course is the case in the real world. Nobody can have information on everything, especially in this world of Big Data.
Since investors do not have the same information as management, company actions and decisions contain an element of communication within them, a process often referred to as “signaling”.
In the case of dividends, the company is communicating its belief in one or more of the following statements:
·         We are confident enough in our cash flow projections to cover this outlay
·         We have more cash than we need
·         We are confident that if we need more cash later, we can get it

The Apple Example

Figure B - Net Income and Dividend
Now we turn to the case of Apple. Figure B shows Net Income over the past 4 years (the orange line) vs. the Announced Dividend (the tan line).
From this we can deduce that Apple has earned enough to cover its prospective dividend in each of the last two years, but not in the two years prior. Whether two years is a long enough track record to conclude that the firm can cover its dividend is something that needs to be explored.
One might legitimately worry that should innovation fail to continue at Apple’s historical pace and competitors catch up or leapfrog it with their product strategies, Apple might one day look back fondly on the 2010-2011 period as “the good ole days”.
However, if that were a realistic risk, one would expect management not to set the dividend at a level that could not be covered. The decision therefore appears to represent confidence in a continued pipeline of great products that will sell really well. They know the pipeline better than we do!
Figure C - Funds From Operations vs. Cap Ex
Earning enough from an income statement view is not the whole story; we need to evaluate from a cash point of view as well. Figure C shows Apple’s Funds from Operations (the orange blocks) compared to its Capital Expenditure and Acquisition activity (the tan blocks) over the past four years. This graph shows more than sufficient coverage for a dividend payment in three of the past four years.
That this coverage is consistent and conservative can indicate either management confidence, as discussed above, or that the rate was set so low as to be meaningless with respect to signaling.
Part of the stated cash-hoard objective of Mr. Jobs was to retain enough cash to pursue acquisition opportunities with minimal disruption. Will the dividend decision make a dent in this “financial flexibility”?
Reviewing Apple’s acquisition activity, annual spending has never exceeded $1 billion (not counting transactions for which no value was disclosed). Given that Apple is starting with a “$98 Billion war chest”, reducing it by $10 billion for the share buyback, plus a backstop should operating activities fail to cover the dividend, leaves ample “strategic cushion” (about 80 years’ worth!) for the type of acquisition strategy it has historically pursued. From this perspective, then, the dividend does not impair financial flexibility and sends no negative signal.

Key Takeaways
Dividend policy in the real world diverges from pure theory for a number of reasons. One of these is the signaling effect in a world where one party holds far more information than the other. In this context, Apple’s dividend decision signals a neutral to conservatively positive expectation of future performance.

Questions
·         What decision would you have made in Apple’s shoes?

Add to the discussion with your thoughts, comments, questions and feedback! Please share Treasury Café with others. Thank you!

Thursday, March 15, 2012

Who Will Be in the Final Four? – Lessons from an Analytical Journey

The NCAA tournament usually sneaks up on me to the point where I am filling out my brackets on Thursday morning having given little thought or attention to any sort of rational strategy.
My thought was to run a Monte Carlo analysis (surprise, surprise) of different strategies and from this select picks for my brackets. The following is an account of this “analytical journey” with some lessons about analysis thrown in along the way.

Data Please!
Figure A
The first step in an analytical process is data - without numbers to crunch, the analysis is “qualitative” rather than “quantitative”. Qualitative analysis is appropriate in many situations, but for a Monte Carlo one needs to establish parameters that will change, and for that we need numbers.
I found a data set from Shawn Seigel that showed survivorship for each seeded team in each round of the tournament. Figure A portrays this initial data: the number of times each seed has advanced from one round to the next.
This information did not load nicely into Excel, and required manual entry.
Lesson #1 – Data is rarely in the form required, and there will be time spent preparing it.
By dividing the current round’s survivors by the previous round’s, we can arrive at a probability of victory. For example, in Figure A 96 of the #2 seeded teams survived the first round, and of those 64 survived the second, for a probability of 2/3.
While these probabilities allow us to calculate an expected value for arriving at the Final Four, this does not serve the Monte Carlo objective, since it is a static calculation without any parameters that we can vary from one iteration to the next.
Lesson #2 – to achieve our analytical objective we often need to go beyond the initial data.

Let’s Start Crunching Data!
Figure B
For our NCAA survivor dataset, we can use the binomial distribution to create a varying parameter, namely the probability itself. Given a number of trials (n) and an individual event probability (p), this distribution tells us the probability of each possible number of occurrences.
For example, Figure B shows the probability of each number of heads when we flip a coin 10 times (i.e. n = 10, p = .5). Reading from this, the chance of getting exactly 5 out of 10 heads is almost 25% (even though 5 out of 10 is the expected value), while getting 4 or 6 heads (close to the expected 5) is slightly over 20% each. Thus, approximately 2/3 of the time we will get 4-6 heads if we flip a coin 10 times (my apologies to the academically oriented – 10 flips of a fair coin).
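Figure B is easy to reproduce; a quick sketch using Python’s scipy, assuming a fair coin (n = 10, p = 0.5):

```python
from scipy.stats import binom

n, p = 10, 0.5
print(binom.pmf(5, n, p))                       # ~0.246: exactly 5 heads, "almost 25%"
print(binom.pmf(4, n, p), binom.pmf(6, n, p))   # ~0.205 each for 4 or 6 heads
print(binom.cdf(6, n, p) - binom.cdf(3, n, p))  # ~0.656: 4 to 6 heads, roughly 2/3 of the time
```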
Figure C
This is a benefit because, given that we want events to vary, we can define bounds within which that variation may occur. Using the Figure B data, if we want to capture the outcomes that occur roughly 95% of the time, we would use results between 2 and 8 heads (about a 97% chance) or between 3 and 7 (about an 89% chance).
Using this method, Figure C shows the survivorship given a high and low bound (equivalent to the 95% chance) for each of the first four rounds of the tournament. In Round One, the results are fairly linear. This is as expected: since a #1 seed plays a #16, a #2 plays a #15, and so on, the probabilities for the top eight seeds should be a mirror image of those for the bottom eight. The orange line in the graph depicts a straight 45-degree line, with which the results tend to agree.
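As an illustration of how such bounds can be generated, here is a sketch using the #2 seed’s second-round figures from above (96 games, 64 wins); whether this matches exactly how Figure C was constructed is an assumption on my part:

```python
from scipy.stats import binom

n_games, p_win = 96, 64 / 96                 # second-round history for the #2 seed, per Figure A
low, high = binom.interval(0.95, n_games, p_win)
print(low, high)                             # roughly 55 and 73 wins out of 96: the 95% band
```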
Problems begin to develop in Round Two (and get worse thereafter). The drop-off is steeper than the 45-degree line, suggesting that teams below the #1 seed are much more equal in strength than their seed rankings suggest. Furthermore, there is a higher probability of the #10-12 seeds winning their second-round game than the #7-9 seeds. Does this make sense?
Lesson #3 – think about what your data is telling you.

Ring-Ring…..Ring-Ring…….Data Here….I’m Telling You Something!
When we think about the structure of the NCAA tournament, we can see why this is the case. The team seeded #8 plays the team seeded #9 in the first round, so one of those two will survive to the second round. However, in that round they will in all likelihood face a #1 seed (to date the #1 seeds have 100% survivorship in Round One). Is it any wonder, then, that a #8 or #9 seeded team will not have a very good Round Two record?
Compare that against a #11 seeded team, which will face either the #3 or the #14 seed. Looking at Figure A again, we see that a #14 seed has made it to Round Two 15 times and the #3 seed 85 times. Thus, 15% of the time the #11 seed is the higher-ranked team in its second-round game.
The fact that different teams face different odds as the tournament progresses is what is known as path-dependence. When data exhibits path-dependence, we need to adjust our analytical methods to account for it.
Lesson #4 – the data will throw you curve balls that you will need to approach from a different angle or a new direction

More Data, Please!
Fortunately, seed vs. seed data was available here, though per Lesson #1, this data again required significant processing to get it into a format compatible with Excel and with R (a statistical software package). Curiously, this data set did not show up in my first search; looking into it, I discovered that I had used the phrase “seed by seed” in the second search and “seed by round” in the first.
Figure D
Lesson #5 – when searching for data, take a multiple query approach because small distinctions will matter

Round by Round
A #1 seed, after advancing to the second round (which has occurred 100% of the time to date), will play either the #8 or the #9 seed. The top panel of Figure D shows the probability of victory along with confidence intervals around that probability; the orange line depicts a 50% probability. The bottom panel shows the number of games played against each seed. In this case, it is a fairly even split whether the second-round opponent is a #8 or a #9.
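The counts behind Figure D are not reproduced here, so the numbers below are purely hypothetical; the sketch only shows one common way (a normal-approximation interval) to put a confidence band around a seed-vs-seed win probability:

```python
import math
from scipy.stats import norm

def win_prob_ci(wins, games, conf=0.95):
    """Point estimate and normal-approximation confidence interval for a win probability."""
    p_hat = wins / games
    z = norm.ppf(1 - (1 - conf) / 2)
    half = z * math.sqrt(p_hat * (1 - p_hat) / games)
    return p_hat, p_hat - half, p_hat + half

# hypothetical record for #1 vs #8 in the second round (not the actual data)
print(win_prob_ci(50, 62))   # ~0.81, plus or minus roughly 0.10
```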
Figure E
In the third round, the #1 seeds continue to dominate, as shown in Figure E. The probability of victory begins at 62% against #4 seeds and continues upward from there. Problems begin to emerge here, however, as the sample sizes become smaller. The results for the #12 and #13 seeds are especially problematic: at a historical 100% chance of victory, there is no variability to simulate in the Monte Carlo model.
Figure F
In the fourth round, #1 seeds have played a few games against #7, #10 and #11 seeds. Figure F shows the 4th round data, and it looks strange. There does not appear to be a logical reason why the probability of victory against a #11 should be so low compared to the others. However, the number of times this matchup has occurred gives us a clue as to why: a #1 has played a #11 only 5 times. That is certainly not a sufficient sample to establish a credible estimate of the true probability.
The problem of small samples preventing reliable estimates grows as we evaluate lower-seeded teams. Figure G has the results for the #4 seed in the 4th round of the tournament. Taken at face value, a #4 is expected to do better against a #2 than against a #3, and has only a 50% chance of beating a #7. But again, the games-played chart shows the weakness in these conclusions: the #4 has not played more than 10 games against any of these opponents.
Figure G
In order to fill in these gaps, we can do a number of things. For our #4 seed, we could combine estimates of the individual seed results into groups, such as #2 and #3, and #6, #7, and #11. For our #1 seed in the 4th round, we could combine the results of games against the #7, #10 and #11 seeds, in which case we would have an 80% probability of victory. This is in line with the slope of the graph in Figure F.
Combining seed “buckets”, while increasing our estimation power, has the problem of requiring us to map the combined results back to the individual seeds. If the #1 plays a #7 rather than a #11 in the 4th round, how much do we “shade” our 80% probability? Presumably we would assign a lower-than-80% chance to the #7, say 75%, the bucket average of 80% to the #10, and a slightly higher-than-average 85% to the #11. From a purely analytical point of view this method is somewhat arbitrary, though it might appear reasonable.
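A small sketch of this bucket-and-shade idea; the individual game counts below are placeholders (we only know from the data above that #1 vs #11 has happened 5 times), and the plus-or-minus 5 point shading is the same judgment call described above:

```python
# hypothetical win/game counts for the #1 seed's 4th-round bucket
wins  = {7: 9, 10: 7, 11: 4}
games = {7: 11, 10: 9, 11: 5}

pooled_p = sum(wins.values()) / sum(games.values())    # 20 / 25 = 0.80 for the whole bucket

# shade the pooled estimate back to each seed by an assumed 5 percentage points
shaded = {7: pooled_p - 0.05, 10: pooled_p, 11: pooled_p + 0.05}
print(pooled_p, shaded)    # 0.80 pooled; 0.75 / 0.80 / 0.85 by opponent
```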
Given the sheer magnitude of adjustments that would be made (there are seed vs. seed combinations that have never occurred for which we would need to create variables), and the underlying uncertainty surrounding the reliability of these assignment methods, establishing the parameters to run a Monte Carlo would be a very time consuming process and the results questionable. For that reason we are going to abandon our Monte Carlo objective.
Lesson #6 – we must be willing to revise our objectives and our approach
Lesson #7 – much as we might like, a Monte Carlo is not always possible

So Who To Pick?
Given that most brackets award higher points as the tournament progresses, we will go back to Figure A and note that over 90% of the teams that win the Finals, Final Four, and Elite Eight games are the top 4 seeds, and that the #1 seeds are more than twice as likely to win as the next best seed (which in all cases was the #2).
Based on this, going with the #1 seeds in the Final Four is the best shot you can take to win.
Lesson #8 – Simple is oftentimes better

Key Takeaways
There are many lessons to be learned about statistical analysis, and these can be loosely classified into the following axioms: data availability and usability are critical, we must appreciate the limits of what the data can tell us, and we must be flexible in our approach to the analysis, its objectives, and its results.
Questions
·         Who are your Final Four picks?

Add to the discussion with your thoughts, comments, questions and feedback! Please share Treasury Café with others. Thank you!

Friday, March 9, 2012

Working Capital Finance and Accounts Payable

Up until now we have looked at the Cash Conversion Cycle components on the asset side of the balance sheet, inventory and accounts receivable, along with various ways we can improve and optimize these metrics.
Let’s not neglect the other side – there are opportunities there as well!

A Blinding Flash of the Obvious – Why Simple is not Always Best
If we want to extend Days Payable Outstanding (DPO), which serves to reduce the Cash Conversion Cycle and hence our Working Capital investment, the most obvious thing to do is to pay our bills later! If we are currently on 30 day terms, we go for 45 day terms. If we are on 45 day terms, we go for 60 day terms.
There are a couple of big issues with this tactic, unfortunately. If we unilaterally decide to do this - pay our bills in 45 days when the supplier wants payment in 30 - we begin to earn the reputation of a deadbeat. Our suppliers no longer like us, and at some point they cut us off.
Even if things are not this dire and we negotiate a payment term extension, this approach is a zero-sum game. Yes, we extend our DPO and thereby decrease our working capital investment. But our supplier’s DSO increases by the same amount, thereby increasing their need for working capital.
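To put a toy number on the zero-sum point (the daily purchase figure is made up purely for illustration):

```python
daily_purchases = 1_000            # TCMU of purchases per day, hypothetical
old_terms, new_terms = 30, 45      # payment terms in days

shift = daily_purchases * (new_terms - old_terms)
print(shift)   # TCMU15,000: freed from our working capital, added to the supplier's receivables
```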
By forcing our supplier to increase their investment, they will logically require additional return to compensate their investors, and ultimately this will be reflected in the price we pay for our supplies…at least in theory.
Even if they “eat the cost”, there is an element of sleaziness to this tactic that we may not wish to have associated with us.

Don’t Ignore the Converse
Extending our DPO is at direct odds with another common supply chain/accounts payable metric – Discounts Taken.
This term refers to the fact that some suppliers will sell on terms where a discount is granted if payment is made within a shorter timeframe. A common arrangement is that payment within 10 days earns a 2% discount off the invoice price, whereas the full invoice amount is due if we pay in 30 days. This is called “2% 10 / Net 30”.
The disadvantage of taking the discount is that we need to finance our payables 20 days sooner, so we actually end up extending our cash conversion cycle. Yet a 2% discount is significant: for 20 days’ time, it translates into a very, very high rate of return on a per annum basis.
Using simple math and rounding, 20-day cycles occur roughly 18 times per year, so this translates into a 36% annual rate of return. Even investments in hot growth stocks seldom yield this kind of return, and their risk is a lot greater than what we face (which is none – payment is in our complete control). So if our cost of capital is 10%, 15%, or even 20%, we can actually “lock in” a gain by paying early if we are offered discounts of 2%.
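A quick sketch of the arithmetic behind that 36%, plus slightly more precise versions (the rounding conventions here are my own):

```python
discount, days_early = 0.02, 30 - 10   # "2% 10 / Net 30": pay 20 days early for 2% off

rule_of_thumb = discount * (360 / days_early)                      # ~36%, the simple figure above
simple_rate   = (discount / (1 - discount)) * (365 / days_early)   # ~37%: the 2 is earned on the 98 actually paid
effective     = (1 / (1 - discount)) ** (365 / days_early) - 1     # ~45% if compounded over a full year

print(rule_of_thumb, simple_rate, effective)
```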
For this reason, one needs to be careful using DPO or the Cash Conversion Cycle as a metric, as it can incentivize costly behavior.

It’s All Relative
If we decide we want to extend our payment terms, we are much more likely to do this with the vendor’s consent (thus making us less “sleaze-bally”) if we follow a simple premise from the strategy realm.
One of the Five Forces of Strategy (per Michael Porter) is Buyer Power. This term describes the situation where the buyer has leverage over the seller in the market.
One way to think about this without the terminology is to ask, in relation to our ability to get more favorable payment terms: is it better to be a small fish in a big pond or a big fish in a small pond?
If we are our supplier’s “big fish”, they are going to be a lot more likely to be happy about letting us go to 45 day terms from 30 day terms. They might even offer it up!

Roll With the Flow
One way to extend our DPO without directly hurting our suppliers is to make use of credit card payment systems and programs.
If our friendly banker has established a purchasing card program for us, and we pay a vendor by credit card on the last day of the invoice term, say day 30, there is an additional period of time before the card balance itself must be paid. As a consumer, we would get an additional 25 or 30 days. Most businesses are on a “tighter leash” when it comes to this, but the float can come close to the consumer’s profile depending on the situation.
I say we are not “directly” hurting our supplier because it is usually the seller’s account that is hit with credit card transaction fees. We pay on the 30th day by credit card and they get their money soon thereafter, but they are charged an “interchange fee” for our payment, so they do not realize the full value of the sale. This fee compensates the credit card issuer for, among other things, the time value of money over the period between its payment to the supplier and its receipt of funds from us.

Get Trendy
One of the big buzzwords we hear these days in the working capital world is the concept of “Supply Chain Finance”. Essentially this means that a bank is willing to make a loan, and when they do this they earn interest.
The typical situation presented when banks attempt to sell these products is one where we wish to extend our DPO but our supplier does not want to suffer the corresponding increase in DSO (it is a zero-sum game, remember?). If the supplier is to get their payment earlier, while we make our payment at the same point in the cycle that we always have, someone has to bridge the gap between those dates.
Who does this? Our friendly neighborhood bank, of course!
Why do they do this? Because either the supplier, by taking payment early, gives up a small percentage of the sale, or the buyer (i.e. us!), by deciding not to immediately fund the supplier’s payment, pays a little bit extra (i.e. interest) when paying at the end of the “normal” payment period.

Key Takeaways
Extending our Days Payable Outstanding improves our Cash Conversion Cycle and therefore reduces our working capital investment and its associated capital cost. This can be accomplished by paying bills later, working with suppliers where we are a “big fish”, and utilizing banking solutions such as credit cards or supply chain finance. We need to be careful that we correctly ascertain the cost vs. benefit trade-offs in these situations.

Questions
·         What are your favorite methods to extend Days Payable Outstanding?

Add to the discussion with your thoughts, comments, questions and feedback! Please share Treasury Café with others. Thank you!

Friday, March 2, 2012

Working Capital Inventory - The Critical Fractile Method

Up to this point, we have explored the inventory elements that comprise the Cash Conversion Cycle in a static fashion.
In Working Capital – Finance and Inventory we looked at how inventory can impact our risk and cost of capital.
In Working Capital – Economic Order Quantity we looked at a common formula to optimize our inventory ordering or production activity, with an exploration of its strengths, weaknesses, and hidden “traps”.
Today we turn to the first method that incorporates probability into the mix. This has the great advantage of reflecting the uncertainty involved in operating a business, a fact that has been ignored or assumed away in methods up to this point.
In other words, it’s a little more realistic, isn’t it?

The Critical Fractile Model
The Critical Fractile model, in one sense, is an incremental method of viewing inventory ordering or production. By “incremental” we mean we can use it to look at things from a “one step at a time” viewpoint - if we are at 10, we look at 9 and 11. If we are at 100, we look at 99 and 101.
To take a numerical example, let’s say we operate a hot dog cart on a street corner sidewalk of your favorite city (mine is Chicago, of course, though once I visit Hong Kong next month, who knows!).
Our hot dogs cost us ²1 each (for new readers, ² is the symbol for Treasury Café Monetary Units, or TCMU’s). We sell our hot dogs for ²3 each, so for each sale our margin is ²2. Currently we buy 50 hot dogs per day - some days we sell out and some days we don’t. Since our pushcart requires cooked hot dogs, they are not reusable, so all leftover items are discarded.
Figure A
In an effort to improve our performance, we have tasked our highly skilled finance and treasury analysts with crunching some Big Data, and from that analysis have determined that for our street corner hot dog demand averages 50 per day with a standard deviation of 20, in the form of a normal distribution. Given this distribution, Figure A shows, for every quantity between 0 and 100, the probability that that particular hot dog is needed.
Earlier we noted we have been stocking 50 hot dogs. Is this optimal, or would we be better off stocking only 49 instead? When using the Critical Fractile method, we compare the incremental probability of gain and loss. At issue currently is the 50th hot dog.
Figure B
Because we know the demand probabilities, we can calculate our “Expected Value” for carrying the 50th hot dog. The term Expected Value comes from probability theory; in this case it is represented by the equation in Figure B - the probability-weighted gain less the probability-weighted loss (pG × G − pL × L).
For the 50th hot dog in our inventory, we have a 50% chance of selling it (pG =.5, G = ²2) and a 50% chance of not selling it (pL =.5, L = ²1). Therefore, by our equation in Figure B, our expected value from carrying the 50th hot dog is ²0.50. Since this is a positive contribution, stocking the 50th hot dog makes sense.
Perhaps then it makes sense to stock 51 hot dogs? Based on our normal curve, there is a 52% probability this one will not be sold, and a 48% probability that it will. Given these probabilities and our gain and loss figures of ²2 and ²1, the equation in Figure B tells us that the expected value from carrying the 51st hot dog is ²0.44. Since this is a positive number, we improve our expected earnings by carrying this additional unit.
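A minimal sketch of this incremental check, assuming demand is Normal(50, 20) as in Figure A and that Figure B’s expected value is pG × G − pL × L:

```python
from scipy.stats import norm

mu, sd = 50, 20          # demand: mean 50, standard deviation 20
gain, loss = 2.0, 1.0    # TCMU2 margin if sold, TCMU1 lost if discarded

def incremental_ev(k):
    """Expected value of stocking the k-th hot dog."""
    p_sell = 1 - norm.cdf(k, mu, sd)            # chance demand reaches the k-th unit
    return p_sell * gain - (1 - p_sell) * loss

print(incremental_ev(50))   # ~0.50, as above
print(incremental_ev(51))   # ~0.44, still positive
```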

From Incremental to Cumulative
Figure C
In Figure C, the orange line shows the incremental contribution for each hot dog stocked (units of measurement are on the left vertical axis). Notice the orange line starts on the left-hand side of the graph near ²2. This is the same amount we gain on the sale of a hot dog. Since the probability of selling 1 hot dog is well over 99%, it makes sense that the line is almost ²2 on the left-hand side.  
On the right-hand side the orange line is negative. If we stocked the 100th hot dog, we would almost never sell it (less than 1% of the time), so its expected value is very close to minus the cost of the unsold inventory: -²1.
The dark brown line on the graph is our cumulative earnings (its measurement is on the right axis). On the very left-hand side, the brown line starts around ²2 (why? because if we only had 1 hot dog, and almost always sold it, we would make almost ²2 per day). It climbs to ²10 pretty quickly, because even the probability of selling the 5th hot dog is over 98%, so somewhere between 5 and 6 hot dogs per day the brown line reflects ²10 in earnings.
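For those who want to reproduce the brown line, here is a rough sketch (the exact construction of Figure C is an assumption on my part, but this approach gives the same shape and peak):

```python
import numpy as np
from scipy.stats import norm

mu, sd, gain, loss = 50, 20, 2.0, 1.0
stock = np.arange(1, 101)
p_sell = 1 - norm.cdf(stock, mu, sd)                 # chance each marginal hot dog sells
incremental = p_sell * gain - (1 - p_sell) * loss    # the orange line
cumulative = incremental.cumsum()                    # the brown line

print(stock[cumulative.argmax()])                    # peaks at about 58 hot dogs
```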

The Critical Fractile – and Grand Finale
In Figure C, our cumulative earnings line, the brown one, is hump-shaped. The top of that hump is our maximum earnings potential given the combination of our gains, losses, and probability distribution.
Figure D
Notice that the peak earnings coincide with the point on the graph where the orange line, our probability-weighted incremental gain or loss, goes below 0. This makes sense - if each additional hot dog has an expected loss, then our earnings will be lowered by these additional losses as we add them. Calculus lovers will note that the incremental earnings for each unit is a derivative measure of our total earnings function.
Figure D shows the Critical Fractile equation, which is the ratio of the incremental gain to the combined incremental gain and loss (G ÷ (G + L)). In our case, since the gain is ²2 and the gain and loss combined are ²2 + ²1 = ²3, the Critical Fractile equals 0.6666~.
Figure E shows what we can do with this equation. The graph is our cumulative probability of a hot dog not being needed (i.e. a mirror image of what we looked at earlier in Figure A).
Figure E
The orange line is the Critical Fractile: 0.6666~. Where the Critical Fractile line touches our cumulative probability curve, the brown line traces down to the corresponding number of hot dogs - a number between 58 and 59.
You will notice that this is the same point in Figure C where our incremental line passes from above 0 to below, and is the point of our maximum earnings!
If we know our gain on a sale and our loss on no sale, along with a probability distribution, we are in a position to determine our optimal inventory level merely by following the process leading to Figure D (a short sketch follows the steps below):
a)      Calculate the Critical Fractile per the equation in Figure D,
b)      Determine where this intersects our probability distribution,
c)      Determine the associated inventory level at the intersection point.
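Those three steps fit in a few lines; a sketch assuming the same Normal(50, 20) demand as above:

```python
from scipy.stats import norm

gain, loss = 2.0, 1.0
critical_fractile = gain / (gain + loss)                     # step a: 0.6666...
optimal_q = norm.ppf(critical_fractile, loc=50, scale=20)    # steps b and c in one call
print(critical_fractile, optimal_q)                          # ~0.667 and ~58.6 hot dogs
```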

Other Considerations
The analysis above did not consider a number of factors that some will encounter. In some cases, our inventory is not perishable, and thus an item that is not sold may be held for another day. In this case we would need to modify our Loss figure to reflect the time value of money and inventory carrying cost associated with holding that for an additional period.
In other cases, a lost sale does not result in a permanent loss, but might only be delayed due to a back-order type process. We can again accommodate this in the gain and loss calculations prior to running our Critical Fractile process.
Finally, sometimes a lost sale results in more than one sale being lost. Wireless companies often think in these terms: if a customer has a contract, there is some probability they will renew when it is up, and then some probability they will renew again after that. Thus, losing a sale today also means losing some sales next period and in the periods thereafter. This concept is known as Customer Lifetime Value, and it would be incorporated into our loss figure for purposes of the Critical Fractile process.

Key Takeaways
The Critical Fractile method sets inventory and production quantities while incorporating variability in demand, and is therefore somewhat more representative of the situations a business faces than static calculations.
Questions
·         What is your experience in using the Critical Fractile Method?

Add to the discussion with your thoughts, comments, questions and feedback! Please share Treasury Café with others. Thank you!