Thursday, December 29, 2011

Baby...or Bathwater?

Envision the following scenario:

One of your risk professionals comes up to you and says “We should purchase this analytic system because it successfully predicted the downfall of the financial institutions that failed during 2008, it’s great!”
How should we respond?

To answer this question, we need to take a small detour into the wonderful world of statistics.

Who is Null and Why Did He or She Have a Hypothesis?

Statistical comparisons involve two sets of data, the “control” group and the “treatment” group. There are two possible relationships between these two groups - either there is nothing conclusive to show that they are really different (the Null Hypothesis) or there is (the Alternative Hypothesis).

The person doing the investigating is usually looking to provide evidence for the Alternative Hypothesis. Alternative Hypotheses are based on questions such as: Does this drug work? Are guilty people sent to jail? Does this model adequately predict firms’ financial ruin?
The answer is determined using statistical techniques. Two outcomes are possible: 1) we “fail to reject the Null hypothesis”, meaning there is not enough evidence to conclude the two groups differ, or 2) we “reject the Null hypothesis”, meaning the two groups are significantly different.
Poor Mr. or Ms. Null!

Imagine proposing to someone that way – nice dinner, get down on your knee, bring out the ring box, and say “I fail to reject you as a spouse”! Pretty romantic, isn’t it?

Why the Funny Terminology?

I am sure some know the technical reason for this (if so, please leave a comment!), but the thing I think of (maybe because it is easier to remember) is Nassim Taleb’s discussion in “The Black Swan”, which went something like the following:
We can count 10,000 swans, or as many as we have ever seen (if more), and they can all be white, but that does not prove “all swans are white”. It just means the ones we have observed are.
However, we can count 10 swans, or as few as 2, and if one of them is black, that does prove that “not all swans are white”.
One thing is provable, one is not. Thus the funky terminology about “rejecting or failing to reject” the Null hypothesis.

Enter Reality

So we have two statistical outcomes of the data based on the Null Hypothesis, and two sets of data in real life that may or may not be different.

Any consultant knows that this should become a 2x2 matrix! This matrix will have one axis cover the statistical conclusions regarding the data and the other axis describe the actual reality of the data.

In two of the four boxes, reality matches the conclusion, and in the other two it does not.
The Romance Continues - Adding Error to Rejection

Statisticians have come up with some great terms for the other two boxes in this matrix. It is with great pride that I present to you these two inspired terms:
·        Type I error
·        Type II error
We can all see why they award PhDs for this kind of stuff!
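These two error types can be made concrete with a quick simulation. The sketch below (a hypothetical illustration in Python, not anything from the original analysis) constructs a world where the Null hypothesis is true by design, so every rejection is, by definition, a Type I error:

```python
import random
import statistics

random.seed(42)

def one_experiment(n=30):
    """Draw a sample from a population where the Null hypothesis is TRUE
    (true mean = 0), then test whether the sample mean differs from 0."""
    sample = [random.gauss(0, 1) for _ in range(n)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    # Reject the Null at the 5% level if the mean is ~2 standard errors from 0
    return abs(mean / se) > 1.96

# Every rejection here is a Type I error (a "false positive"), because we
# built the data so that the Null hypothesis is actually true.
trials = 10_000
type_i_rate = sum(one_experiment() for _ in range(trials)) / trials
print(f"Observed Type I error rate: {type_i_rate:.3f}")  # close to the 5% level
```

A Type II error is the mirror image: build a world where the groups really do differ, and count how often the test fails to reject.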

Our matrix is now complete, as follows:

We Now Conclude Our Detour and Return to Our Scenario
Going back to our original scenario, we have been presented with this fantastic but costly software that is able to predict a firm’s financial failure with 100% accuracy.

The questions we need to ask are “what is the full set of predictions this model generated?” and “how do those predictions compare to what actually happened?”

As it turned out, this financial model predicted that over 30 financial institutions were going to fail, far more than actually did. In other words, it threw out the baby with the bathwater!
Key Takeaways

While we do not want to “throw out the baby with the bathwater”, neither do we want to “keep the bathwater with the baby”!

·         What has been your experience with Type I and Type II errors?
·         What has been your experience regarding the omission of one of the two?

Add to the discussion with your thoughts, comments, questions and feedback! Please share Treasury Café with others. Thank you!

Tuesday, December 27, 2011

Software as a Service or Installed?

As discussed in prior posts, going with a Treasury Workstation involves a decision between utilizing the Software as a Service delivery model or the Installed model. Each has its strengths and weaknesses.
Googling this topic will get us zillions of hits. A lot of these are written by folks with an ulterior motive. One SaaS vendor published a pros-and-cons list, and among the cons was a bullet claiming that installed software costs about 23% of the purchase price in annual maintenance and licensing fees. I know from experience this is not always the case, at least in the projects I have been involved in.

As you know from prior posts, I am a fan of the process of “triangulation”, which in research mode means answering the question “what are the common and universal themes?” or “what topics show up more than once?” rather than taking any one document at its word.

In a sense it is creating a “best of…” set of information. Where there are many opinions, this process helps to identify what we might be able to really believe, as opposed to running into a “one off” opinion.

Links to six articles I reviewed are at the end of this post. In addition, there are links to several consultants who maintain archives on Treasury Workstation vendors and selection issues.
For the Treasury Workstation, there are two main factors that show up in almost every piece.

Financial Elements
The financial implications are obviously one of the primary drivers in this decision. There are a couple of different perspectives that need to be considered.

Payment Timing – for an installed system, there will be a large initial outlay of cash for the purchase of the license and the installation. The SaaS version has a much lower initial outlay.

Some organizations will appreciate the ability to capitalize and amortize a larger upfront payment, while others might consider an annual operating outlay to be more advantageous.

Ongoing – these costs are associated with annual maintenance and licensing fees. They can vary considerably. In theory, ongoing costs for the installed system should be lower than for SaaS, since the installed vendor received an upfront payment, is not hosting anything on its systems, and is not directly responsible for the continuing maintenance of the system.
In addition to payments to the software vendors, internal costs will also differ. The resource requirement is going to depend, no matter the choice, on how the system needs to integrate with others within the organization.
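To put the payment-timing and ongoing-cost profiles on an equal footing, a net-present-value comparison is one simple approach. Here is a minimal sketch in Python; every number in it (upfront license, annual fees, discount rate, horizon) is a hypothetical placeholder, not a vendor figure:

```python
def npv(cash_flows, rate):
    """Net present value of a list of annual cash outflows,
    where cash_flows[0] occurs today (year 0)."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

rate = 0.08   # hypothetical discount rate
years = 7     # evaluation horizon

# Installed: large upfront license + installation, lower annual maintenance
installed = [500_000] + [60_000] * (years - 1)

# SaaS: small setup fee, higher annual subscription
saas = [50_000] + [120_000] * (years - 1)

print(f"Installed NPV of cost: ${npv(installed, rate):,.0f}")
print(f"SaaS      NPV of cost: ${npv(saas, rate):,.0f}")
```

With these made-up numbers the installed route costs more in present-value terms; changing the horizon, discount rate, or fee levels can easily flip the answer, which is exactly why each organization needs to run its own figures.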
Deployment and Integration
The ability of the software to “play well” with the rest of the infrastructure is a primary factor in what the magnitude and scope of deployment and integration will entail.

If the installed route is chosen, it is often due to the fact that a deeper integration with the rest of the technology infrastructure is desired, which can take more internal resources to accomplish.
However, given that SaaS systems cannot be customized to a great extent, it is also possible that the resources required to modify existing internal systems to work with the external one will be substantial.

Upgrades – since upgrades for the installed user are optional, a system delivering acceptable functionality can avoid costs associated with adding new or updating features, whereas a SaaS provider will pass those along.
To the extent that customization has occurred, upgrades pose the risk of making those customizations obsolete, meaning they must be rebuilt with each upgrade.

Key Takeaways

Selection of an Installed or SaaS version of a Treasury Workstation requires a lot of thought and effort with respect to the cash flow differences between the two products and the organization’s IT infrastructure strategy. Both forms of Treasury Workstation have their advantages and disadvantages, and it is up to the organization to decide which factors carry more weight in the decision.

·         If you have recently implemented a Treasury Workstation solution, what were the primary factors that drove the decision between installed or SaaS versions?

Add to the discussion with your thoughts, comments, questions and feedback! Please share Treasury Café with others. Thank you!

Links for the Article

Consultant Links

Tuesday, December 20, 2011

No Way Back

In our recent Treasury Café post we discussed, at a very high level, four different options to handle the cash management needs of the organization – accounting system functionality, spreadsheets, bank provided systems, and Treasury workstations.
The decision to move from a variety of in-house processes to the more complete ERP or Treasury Workstation solutions involves a very critical strategic component (for Finance and Treasury) that we had best not ignore.

A Business Case is Key
Most organizations, when considering a technology investment, will require the development of a business case.
A business case can be thought of as a mini-business plan. It will contain an overview of the proposed project, explain why that project is needed, detail how the project will be initiated and managed, discuss the organizational impacts that will occur, and demonstrate a financially compelling story, among other things.

The Initial Transition is Valuable…
Why is this business case process so critical?
In a nutshell – economics.
Look at any Treasury Workstation vendor, or any consulting firm that includes this in its scope of practice (such as Treasury Strategies and their Treasury 3.0, a subject of prior Treasury Café posts), and the major economic selling point for adopting the technology is the benefit derived by moving from spreadsheets and “off-line” processes to the more seamless, integrated, and accommodating functionality of the proposed technology product.
It is indisputable that there are labor savings.
To give an example, if our cash management staff was originally 10, with the workstation we might be able to operate with 6. That 4-FTE savings, because it is perpetual, can go a long way towards paying for the system.
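The payback arithmetic behind that example can be sketched in a few lines. The cost figures below are purely hypothetical, chosen only to show the calculation:

```python
# Hypothetical figures to illustrate the payback logic (not from the post)
fte_savings = 4              # headcount reduced from 10 to 6
cost_per_fte = 100_000       # fully loaded annual cost per FTE, assumed
system_cost = 900_000        # upfront workstation cost, assumed

annual_savings = fte_savings * cost_per_fte
payback_years = system_cost / annual_savings
print(f"Simple payback: {payback_years:.1f} years")  # 2.2 years with these assumptions
```

Because the savings recur every year while the system cost is largely one-time, even a conservative estimate of FTE savings tends to produce a short payback, which is what makes the initial business case so easy to write.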

…but the Die is now Cast
So let’s assume that we embark on a project and that we realize all the benefits we had anticipated. This is great! It might go down as the best project in all of mankind!
What is wrong with this picture?
After a period of time, whether it be three years, or five, or ten, what happens if we determine that a different system or technology infrastructure is more advantageous?
We might decide this for a number of reasons. Perhaps the vendor’s support for the current system has transitioned from “continuous-improvement updates” to “maintenance-only”. Perhaps we want to move from an installed system to SaaS due to our web utilization patterns. Perhaps we want to just stay “ahead of the curve” due to an organizational philosophy.
Our problem is going to be that, unlike the initial transition, moving from one system to another does not yield many tangible, identifiable benefits.
We went from 10 to 6 with our last project. This project might not yield any additional FTE savings, or perhaps at most 1.
In other words, we will incur expense for the project, without any offsetting benefits to pay for it. When the CEO or CFO then asks “what are we going to gain on next year’s P&L by doing this?” the answer will be “none” or “not much”. Not a very persuasive argument, is it?
Given this, the system by which we take out the legacy inefficiencies will be the system we are likely going to continue to use (i.e. be stuck with) for many years to come.

Key Takeaways
Make sure your initial system selection is something you will be able to live with for a good period of time. Moving from one system to another does not yield many dramatic economic benefits like the initial transition.
In other words, there’s no going back and there are no do-overs…and “building the bridge to the future” does not always make the most compelling tangible economic argument.

·         Are you managing cash on spreadsheets or a technology product?
·         How long have you been on your current technology product?
·         Is there any momentum for a “next generation” product in your organization? If so, how was this accomplished? If not, why was this plan not deemed appropriate?

Add to the discussion with your thoughts, comments, questions and feedback! Please share Treasury Café with others. Thank you!

Sunday, December 18, 2011

How Long is 1,000 Man Years?

In a recent post on Treasury technology options, I made clear a belief that Finance and Treasury should strive as much as possible to control its technology destiny.

The reason for this is that often when an IT change is desirable, the following reaction occurs amongst the IT folks when the possibility is broached with them:
·       The eyes grow wide and bulge-out slightly
·       Both hands are raised and begin flailing wildly back and forth in the air
·       A variety of half-coherent ramblings are made, but always including the phrase “that will take 1,000 man-years to complete”
·       Coherence returns along with the admonition that this request was not made three years ago, when the current year’s work was being planned
If we are to achieve the best we can, we need to somehow manage this fact of organizational life.

Focus on What You Can Control
My point with this is not to idly complain, because as we all know that does not do a lot of good. What will do a lot of good is if we take responsibility for fulfilling our organization’s needs in the best way we can so that we can achieve our objectives.

By doing this we focus on operating in the realm of things we can control rather than in the realm of things we cannot. Covey wrote about this in his book “The 7 Habits of Highly Effective People”.

Get Skilled

One very simple thing we can do is to accumulate a reservoir of technology talent, knowledge, and ability within our group. This is advantageous for several reasons.
First, with an understanding of how the technology works, the coding demands of different software elements, the system architecture and its design rationale, and an understanding of our needs, we are able to ask pointed, intelligent questions that help break down the “1,000 man-years” perception into a series of component parts.
Projects become a lot less overwhelming when they are “chunked”. Having the ability to discourse intelligently and purposefully about the systems can often bring about a milder reaction, and inspire “can do” thinking to replace the “can’t do” knee-jerk reaction.
Second, technology is being incorporated into the fabric of everyday life in more and more ways.
Five years ago I knew nothing about routers, but now I have a home network set up like half the other homes in the neighborhood. We can control our furnaces from smart phones. We get our grocery coupons on-line.
Get Operational
Technology knowledge helps us to be able to do more things for ourselves.

We can program routines using Treasury Workstation functionality. We can program Excel using Visual Basic for Applications in order to make information input simple while providing a seamless translation for upload into the workstation.
From our technical knowledge we are able to create systems and environments that interact with each other seamlessly, much as a great basketball team passes the ball around to move down the court for the open shot.
In addition to being in control of our daily life, there is one additional advantage. It earns the respect of the IT folks. They understand the work we have done, the investment of time and energy we have put in, and due to that we have earned the right to make some requests and expect them to be honored.
Get Graded
In the Steve Jobs biography by Isaacson, there is a passage that states something to the effect that A players seek the company of A players, while B players seek the company of C players.
By virtue of the work we have done by ourselves, on our own, to the best of our ability, pushing our knowledge to its limits and seeking to learn more, we prove we are A players. Due to this, we are more likely to attract the A players of IT.
The A players of IT are not the ones who go through the “1,000 man-year” dance. They are the ones who say – “Yeah, there’s a way we can do that!” Working with these folks is a joy and pleasure.
Key Takeaways
Our usage of technology will grow each and every year, as we have previously discussed in WikiFinance, WikiTreasury. In order to stay ahead of the trends, and work at our best with the rest of the organization, we need to:
·       Continually learn new technologies, understand how they work and how to make them to interact
·       Develop a level of technological sufficiency to perform tasks for ourselves (increasing the zone of control)
·       Communicate intelligently with our IT specialists, creating a level playing field of mutual respect
·       Create a compelling and challenging environment that attracts the A players

·       What current technology learning opportunities are available in your current organization?
·       What is the likely route of next generation technology that you need to “get up to speed” on?
·       What is the status of your relationship with the IT folks? Is it respectful?
·       What parts of the process can you perform in order to reduce the “1,000 man-years” reaction?

Add to the discussion with your thoughts, comments, questions and feedback! Please share Treasury Café with others. Thank you!

Thursday, December 15, 2011

I’ve Been Workin' at the Station, All the Live Long Day

Today I had the privilege to be the guest panelist for the Corporate Executive Board’s Treasury Leadership Roundtable Cash Management Cohort webinar on “Trends in Treasury Technology”.
These kinds of events are always fun for me because I can deliver some of Treasury Café’s insights to a new audience in a different setting. As an added bonus, the folks at the Treasury Leadership Roundtable do most of the “heavy lifting” - I just have to show up!
However, since my short-term memory is notoriously fickle, I decided it best to cover that event now while things are still fresh in my mind.

Choices, Choices
For cash management and related roles, a company’s treasury function will utilize either a) the organization’s accounting system functionality, b) traditional spreadsheets and other generic office tools, c) bank supplied functionality, or d) specialized software systems referred to as Treasury Workstations (hence the title of this blog post).
There are pros and cons to each alternative.

Option A – Jump In With the Accountants
The accounting system functionality (and by accounting system I am referring to SAP, Oracle/PeopleSoft and others) is somewhat limited compared to the specialized alternatives. It is also more cumbersome, and it can be quite costly.
But the primary drawback in my mind is that the Finance and Treasury group is beholden to the IT department. And IT is much more interested in serving the accountants than the Finance and Treasury folks.
It all makes sense, though. It is difficult to see how it can be different. The accounting system is typically of primary concern to accountants, not finance folks (see Why You Need a Finance Person for some of the differences).
Accountants in an organization outnumber finance folks by large ratios - 5 to 1, 10 to 1, 20 to 1.
CFOs of organizations usually emanate from the accounting fold, which makes that group their favorite, and the source of their technical chops.
There are a lot of Sarbanes-Oxley (SOX) requirements for accounting, and since CFOs are personally on the line when certifying the validity of the accounting results, these issues get a lot of C-level attention.
Let’s recap - the masses like it and the powerful elite like it.
Now imagine being the IT person assigned to the accounting system. You have two requests, one from accounting and one from finance. Whose request do you act on first, and whose do you postpone until the tail-end of the three-year planning horizon?
Or when it is budget time, and you need to give up 5% of the planned spend to hit the stretch targets, do you take it out of the accounting projects or the finance projects?
In other words, if the organization is going with option “a”, Finance and Treasury will get the standard, delivered system functionality and nothing more…ever!
They might not tell us this outright. What usually happens is that any request is met by a lot of arm-waving and concerns that it will take 1,000 man-years to accomplish the request.

Option B – Jump In With Bill Gates
Option “b” involves using standard office software to manage cash. This has the advantage of being versatile and completely under Finance and Treasury’s control, thereby freeing us from IT’s 1,000 man-years constraint.
For simple cash management set-ups it can be the most viable alternative. The basic usage of these tools is familiar to most employees, so they can all participate without any significant training periods or change management efforts.
Most software vendors earn their living by maligning use of spreadsheets due to the risk of error and the somewhat more manual nature of the processes to use them.
These disadvantages can be eliminated, but doing so generally requires programming solutions in Visual Basic for Applications, and not all of the Finance and Treasury staff may possess those language skills.
Option C – Jump In With Your Bank
I do not have any personal familiarity with option “c” – bank systems – but am told that these are useful for very basic functions yet leave a lot to be desired as the needs of the organization become more complicated.
And if we have more than one bank, we end up using a lot of different systems to do the same thing, which can be inefficient as well.
In addition, there is a great loss of flexibility, as usage of one bank or another becomes an organizational process and procedure issue in addition to bank services.

Option D – Jump In With People Who Want to Do Business With You
Option “d” – Treasury Workstations – is the route that delivers to Finance and Treasury the most functionality in an efficient package, overcoming the drawbacks to options “b” or “c”, and usually priced much more attractively than option “a”, even if we do not include the added benefit that IT is taken out of the picture for the most part.
Workstations these days come in two different flavors: installed software on the organization’s infrastructure, and the Software as a Service (SaaS) variety, where the software is hosted on the vendor’s servers and accessed via web portals. Each of these approaches has its pros and cons.
The installed model plays better with the organization’s infrastructure and is more “self-sufficient”. The software will still work the day after the vendor goes out of business. Interfaces between programs are internally constructed which also allows greater organizational control, though if we need IT to perform some of this function we run back into the 1,000 man-year problem. It is also more costly.
The SaaS model is less expensive because it is subscription based as opposed to license based, and the vendor can leverage its scale to innovate and include features more effectively. For example, interfacing with Bank A’s system will be less costly for a SaaS provider because they can do it once, and spread the cost of this effort over the x number of subscribers to the software. Under the installed option all the interface programming costs are borne by each individual company.
There is risk here, however. If the vendor shuts down its business overnight we are in a major lurch. Years and years of data might be lost forever. An emergency situation is created, and it is likely to occur at the most inopportune of times.

We Need WikiFinance, WikiTreasury skills!
The technology alternatives that we can use to manage cash and other Treasury functions are one of the primary elements of the WikiFinance, WikiTreasury environment we are moving towards.
For this reason, and others, it helps immensely to have a set of IT skills in the Finance and Treasury group. This was one of the slides in the Treasury Leadership Roundtable presentation, the ideal Treasury person as a mix of both finance knowledge and capabilities combined with technological knowledge of infrastructure, interfaces, and programming.
Let’s be clear - no software is ever going to provide 100% of the needed functionality. There is always going to be a certain amount of customization. This can take place within the Workstation software (if it is installed), the interface process, or through “off-line” activities.
To the extent that our own internal people understand the processes, the architecture, and the interfaces, and have some modicum of programming capabilities, we are able to utilize the product to the best of our ability while simultaneously making it work within our organization’s infrastructure with the minimum amount of added inefficiency.
And we can do it without relying on IT and the subsequent 1,000 man-year problem.

Key Takeaways
There are several choices to be made when determining the technology direction of the Treasury and Finance area. Each choice has advantages and disadvantages.
In order to maximize the Treasury and Finance area’s impact, key technology capabilities, such as programming, architecture, and interface, need to be held within the group. This cannot be outsourced without critical loss of control!

·         Which of the options are you currently using to manage your cash and treasury functions?
·         What is the status of IT skills housed within the Treasury and Finance area?
·         What steps can you take today to enhance your capabilities?
·         Do you have a roadmap for the future?

Add to the discussion with your thoughts, comments, questions and feedback! Please share Treasury Café with others. Thank you!

Thursday, December 8, 2011

8 Steps to Monte-Carlo Insight

I attended a day-long symposium several years back where a portion of the agenda was devoted to risk management. The primary speaker’s company had a policy of hedging 100% of their exposure to a raw material, and they executed this by locking in prices one year in advance.
How different is this from making spot purchases, only one year in advance? Is this really hedging? It seems like putting all the “year-ahead eggs in one basket” would produce similar volatility. Wouldn’t a dollar-cost-average or roll-in strategy be better?
The great thing about today’s age is that we can answer these questions using the wonders of technology!

Can We Test It? Yes We Can!
In “Blogful of Thanks”, I mentioned Lee Crumbaugh and his focus on scenario analysis. “What is the weather going to be like today?” “Hmm, that makes me think of scenario analysis.” “What do you think of Sears threatening to move out of Illinois because the state is not going to provide tax breaks?” “Hmmm, my mind immediately jumps to scenario analysis.”
I am really no different than Lee, except we need to replace “scenario analysis” with “Monte-Carlo analysis”. Let me tell you, a lot of things make me want to do a Monte-Carlo analysis!
If we think about it, the two are really not that different. A Monte-Carlo analysis is merely several thousand scenarios played out in rapid succession.
So let’s do a Monte-Carlo analysis!

Step 1 – Define the Model
For many interest rate, currency, and commodity products, prices tend to “mean revert”. This term is “statistical-ese” for the fact that when prices are too high, they tend to move lower, and when prices are too low, they tend to move higher. In other words, they are more likely to (but not guaranteed to) move towards the mean.
This can be described by the following simple mean reversion equation:

The alpha term (α) is the mean-reversion parameter. Mu (µ) is the long-run mean. Sigma (σ) is the standard deviation, and ϵ is the error term (a random variable from a defined distribution).
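Written out, a standard simple discrete-time mean-reversion form using these parameters is:

```latex
P_{t+1} = P_t + \alpha\,(\mu - P_t) + \sigma\,\epsilon_t
```

When the current price sits above the long-run mean µ, the α term pulls the next price back down, and vice versa; a larger α means faster reversion toward the mean.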
Step 2 – Populate the Parameters
One of the features of many interest rate, currency, and commodity markets is that volatility decreases as we “move out the forward curve”. To see this, look at US corn at two different time periods:

Prices for the “near” months are widely different, yet for the “out” months they are pretty much the same. For some interest rate markets, the near month volatility can be twice that of the out months.
Often, determining the parameters of the model will involve statistical analysis of historic data (this can be a blog post onto itself, so I will not go into it here).
For the sake of this analysis, we are going to populate our corn price curves assuming that the long-run mean is $5.00.
We will set the alpha (i.e. the speed of mean reversion) parameter at .02 for the near months, and increase it gradually in .005 increments. This has the effect of making the out months center on the mean more quickly than the near months, consistent with the graph we have just seen.
We will set the standard deviation at .2 and gradually decrease it, again to capture the higher price movements of the near months vs. the out months phenomenon.
We will “draw” (i.e. simulate) our error term from a normal distribution with a 0 mean and 1 standard deviation.
Finally, since each month’s price for corn will to some extent reflect underlying fundamentals that affect every month’s price, we establish correlations through our price curve using a Cholesky decomposition of a correlation matrix. Correlations for this model were .98 for neighboring months, decreasing by .02 for each month further apart.
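The Cholesky step is the part that most often puzzles people, so here is a minimal sketch of the mechanics in Python (the post itself used Excel and R; this is an illustration, and the 3-month matrix is a toy version of the full 12-month one). Multiplying a vector of independent standard normals by the lower-triangular Cholesky factor of the correlation matrix yields draws with the desired correlations:

```python
import random

def cholesky(matrix):
    """Lower-triangular Cholesky factor L of a positive-definite matrix,
    so that L times its transpose reproduces the input."""
    n = len(matrix)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (matrix[i][i] - s) ** 0.5
            else:
                L[i][j] = (matrix[i][j] - s) / L[j][j]
    return L

# Toy 3-month correlation matrix: .98 for neighbors, .96 two months apart
corr = [[1.00, 0.98, 0.96],
        [0.98, 1.00, 0.98],
        [0.96, 0.98, 1.00]]
L = cholesky(corr)

random.seed(1)

def correlated_draws():
    """Multiply independent standard normals by L to correlate them."""
    z = [random.gauss(0, 1) for _ in range(3)]
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(3)]

draws = [correlated_draws() for _ in range(20_000)]
# The sample correlation of months 1 and 2 should land near 0.98
m1, m2 = [d[0] for d in draws], [d[1] for d in draws]
n = len(draws)
mean1, mean2 = sum(m1) / n, sum(m2) / n
cov = sum((a - mean1) * (b - mean2) for a, b in zip(m1, m2)) / n
var1 = sum((a - mean1) ** 2 for a in m1) / n
var2 = sum((b - mean2) ** 2 for b in m2) / n
sample_corr = cov / (var1 * var2) ** 0.5
print(round(sample_corr, 2))
```

In practice one would use a library routine (e.g. `chol` in R) rather than hand-coding the factorization, but the hand-rolled version shows there is no magic involved.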
These then are the assumptions through the 12 month forward curve:

Step 3 – Identify the Comparisons
In this step, we decide what we are going to model (now that we have a model!).
In this analysis, we are going to look at three different alternatives:
·            No Hedge - 12 units purchased at the prompt month price
·            Year-Ahead Hedge – 12 units purchased at the 12-month forward price
·            Roll-In Hedge – one unit purchased each month for 12 months (i.e. one 12-month purchased 12 months ahead, one 11-month purchased 11 months ahead, etc.)
Step 4 – Push the Button!
Run the Monte Carlo! In this analysis we use 10,000 runs (determining how many runs to use is also a blog post in itself). I ran this in Excel and downloaded the results into R.
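The full model is more elaborate than fits in a post, but a stripped-down sketch conveys the mechanics of Steps 1 through 4. This is not the actual model (it omits mean reversion and the full Cholesky-correlated curve); it is a variance-only toy in Python with made-up parameters, in which near-tenor prices are more volatile than far-tenor prices and all months share a common market shock:

```python
import math
import random
import statistics

random.seed(7)

MEAN = 5.00
RHO = 0.9        # loading on a common market shock (months move together)
IDIO = math.sqrt(1 - RHO ** 2)
# Hypothetical volatility by tenor: near months swing more than far months
vol = {t: 0.20 - 0.01 * (t - 1) for t in range(1, 13)}   # 0.20 down to 0.09

def run_once():
    z = random.gauss(0, 1)  # common shock shared by every month's price
    def price(tenor):
        return MEAN + vol[tenor] * (RHO * z + IDIO * random.gauss(0, 1))
    no_hedge   = sum(price(1) for _ in range(12))     # 12 units at near price
    year_ahead = 12 * price(12)                       # all units locked far out
    roll_in    = sum(price(t) for t in range(1, 13))  # one unit per tenor
    return no_hedge, year_ahead, roll_in

runs = [run_once() for _ in range(10_000)]
names = ("No Hedge", "Year-Ahead", "Roll-In")
cols = list(zip(*runs))
stds = {n: statistics.stdev(c) for n, c in zip(names, cols)}
for n in names:
    print(f"{n:10s} stdev of total cost = {stds[n]:.2f}")
```

Even this toy reproduces the risk ranking (No Hedge riskiest, Year-Ahead least risky); the average-price differences discussed below come from mean reversion interacting with the forward curve, which this sketch deliberately leaves out.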

Step 5 – Assess the Results
The results of this analysis are as follows:

The lowest average price was the “Year-Ahead Hedge”, followed by the “Roll-In Hedge”. Spot was the highest average price, as well as the highest standard deviation. Graphically (brown is "No-Hedge", red is "Roll-In", and orange is "Year-Ahead"):

Based on this, the symposium presenter was correct in using a “Year-Ahead Hedge” strategy, if (and that’s a big if) prices move as modeled.

Step 6 – Interpret the Results
This result disappointed me because I thought the “Roll-In Hedge” strategy would be better, since it averages in different prices as opposed to an “all eggs in one basket” strategy. So does it make sense that the opposite would be the case? Why?
The reason I propose is that the “Year-Ahead Hedge”, because it transacts at the highest point of the curve in terms of mean-reversion, and the lowest point of the curve in terms of volatility, is able to overcome the averaging benefit of the “Roll-In Hedge”.

Step 7 – Question the Results
If this is indeed the reason, we should be able to “question the data” or “question the model” to refute or confirm our conclusions.
If the rate of mean reversion and volatility made a difference, we should be able to eliminate the hedge advantage by making both of these factors constant across all 12 forward month periods.
This has the following results:

As we can see, there is no difference among any of the hedges.
If rate of mean reversion and volatility make a difference, then we should also be able to put them in a different place and by doing so generate different results.
To do this, we put the highest mean reversion rate and the lowest volatility at month 6, and then gradually reversed both on either side of the forward curve (i.e. lower mean reversion and higher volatility towards month 1 and month 12). If we looked at a graph of these parameters we would see a big “U”.
Under this scenario, the “Roll-In Hedge” does better than either alternative:

The “Roll-In Hedge” does better under this scenario because it is the only hedge strategy which takes advantage of the low-volatility / high-mean-reverting period 6 (and to a slightly lesser extent periods 5 and 7, 4 and 8, etc.). The other two strategies transact at the high end of volatility and the low end of mean reversion.
These results support our initial conclusion, and allow us to generalize: “it is better to concentrate hedging activity in low volatility months and/or higher mean-reverting time periods”.

Step 8 – Identify Further Areas of Research
From Step 7 we have concluded that concentrating hedge instruments in low volatility / high mean reverting periods produces the best results.
One area of interest might be to conduct a break even analysis of the parameters. This would address questions such as “By how much does volatility need to be increased at the tail end for hedge improvement to significantly disappear?” or “What mean-reversion level accomplishes a similar effect?”
Another set of questions might surround hedging volumes – “How different are the results if purchases are seasonal rather than constant throughout the year?” or “What level of base load capacity do we need vs. contingent capacity to make this strategy likely to be effective?”

Key Takeaways
A Monte-Carlo analysis of major risks can provide valuable insights into your risk management strategies and processes. In addition, it can provide indicators as to the true value drivers of your organization’s risk and allow you to make adjustments in real-time.
A Monte-Carlo analysis can also raise questions that either allow you to feel more confident in your results or point to areas where further research is warranted.

·         Have you performed a Monte-Carlo analysis in the past month?
·         What insights were learned through this analysis?
·         What strategies were undertaken?
·         What critical value drivers are you paying attention to as a result?
·         What are the unanswered questions to pursue in the next round of studies?

Add to the discussion with your thoughts, comments, questions and feedback! Please share Treasury Café with others. Thank you!