"As you know, Mei, the company decided to move from a DB plan to a DC plan...even though it was hotly debated," Aisha, VP of Human Resources, told Mei as they walked from the cafeteria. "However, now that it's done we need to manage the transition and understand the implications. Can you help us with that? I know you had mentioned Monte Carlo simulation before."
"Absolutely!" Mei replied. For quite a while she had thought such an approach might help quantify the impacts of such a change.
"There are still a lot of questions," Aisha continued. "How will the company go about funding this change in benefit? And what does this change mean for employees compared to where they are currently?"
"Those are great questions," Mei replied. "It is a complicated topic..."
"It is!" Aisha emphatically agreed. "That's why I am trying to get as much clarity as possible."
"Well, I did something similar a while back, and so half the work is already done," Mei said. "Let's set up a time to talk tomorrow...I should have something for you by then."
"OK, that'll be great!" Aisha responded, a grin slowly appearing on her face. "Of course, I could always have my people get a hold of your people..."
Mei smiled. "I think it might be quicker if we just say 2 o'clock!" They both laughed.
"OK, two it is!"
Retirement planning (any financial planning for that matter) can be a difficult undertaking.
For all the reasons we have enumerated before (for example "Why Your Cash Flow Forecast Will Always Be Wrong") forecasting the future is impossible to do. There are simply too many unknowns.
One of the advantages to a technique like Monte Carlo simulation is that it can let you glimpse some of the possible outcomes the future may hold - good or bad.
While our last post - Pension Potholes: Mei and the Curse of Exponentiation - focused on the accounting side of retirement plans, in this post we will look at the economics to both the company and the individual.
There are a number of fundamental differences to be considered when we make a comparison between a Defined Benefit plan (DB) and Defined Contribution plan (DC).
One of these is the investment portfolio. In a DB plan, the company assumes responsibility for managing the portfolio (usually with the assistance of consultants and investment managers). In most DC plans, the individual directs the investments.
A company with many employees, some new and some near retirement, enjoys the benefits of the pooling principle when constructing its investment portfolio. The portfolio can remain relatively constant through time, because in any given year only a small fraction of the assets are needed to pay out benefits.
For the individual employee, the makeup of the investment portfolio will change over time. Why? Because the closer one gets to needing the funds, the less risk one can take with them. If we are going to need $100 next year, investing it in an asset that could decline by 50% over that time is not wise, as we run the risk of having only $50 when we need $100.
On the other hand, if we don't need that $100 for 40 years, a 50% blip...or two...or three, won't matter much as there is a lot of time for recovery and high returns to make up for it.
Because of this, a typical investment pattern through a person's lifetime begins with a large portion of higher return / higher risk securities like stocks. As time goes on, these securities are gradually replaced by lower risk (and lower return) securities like bonds.
Figure A shows how this mix changes over time. Because of the downward slope of the line, it is sometimes referred to as the investment or portfolio "glidepath".
Another major difference between the two plans is who bears the burden of uncertainty.
An employee who has saved $1 million has saved too much if they die the following year, while another will have saved too little if they live to be 103. The answer to the question "have I saved enough?" is unknown until life has played out.
In a DB plan this uncertainty is borne by the pension sponsor. With many participants, this risk is tempered by the pooling principle. Some will grow old while others will die young, with the result that outcomes tend to average out from year to year.
Another source of uncertainty is counterparty risk, and here the DC participant has the advantage. The employee receives a contribution every year and can invest it at their own direction. They can ensure that they diversify their counterparty risk by investing in a number of different funds from a number of different management companies. Should Vanguard go under, for instance, only a portion of their assets will become ensnared in the quagmire.
The DB participant relies on the employer. Should the employer go under, support of the plan will cease, and the future of any benefit payments becomes questionable. There are 'backstops' in the US, such as the Pension Benefit Guaranty Corporation (PBGC), but these are imperfect. For one, the PBGC only guarantees a minimum amount of the benefits, so the remainder will be lost. In addition, being a Federal agency, its future is always a bit uncertain as well, subject to the vagaries of dysfunctional political processes. The US seems to struggle these days with what it can afford and what it is willing to pay for, so there is always the danger that the PBGC itself goes the same way as those it has had to guarantee.
Mei stopped by the office of Enrique, who worked in HR.
"Hi Enrique, I'm helping Aisha with a project and wondered if I could pick your brain for a moment?"
"Sure," said Enrique. "What do you need?"
"With the company moving to a DC plan, we want to understand the financial implications in order to design things in the best way. Part of that involves looking at how we may make funding contributions to the retirement plans."
"I'm not sure how much help I can be for financial items...you're the company's recognized expert at that Mei!" exclaimed Enrique.
Mei blushed. She tried hard to do great work and was always happy to hear that she had been successful, but in her mind that was simply attempting to be the best she could be and giving the company all she had. Doesn't everyone do that?
"Don't sell yourself short, Enrique", she replied. "One perspective we are going to take is to look at how funding a single employee through their life time changes based on the change in plan - so I was hoping you could tell me how we might simulate that process."
"Well, under the DB plan an employee would get a pension payout for as long as they lived, based on a formula of 2% of their final salary times the number of years of service."
"Hmm...so when an employee starts, we need to know a way to project their final salary. What's the best way to do that, Enrique?" she queried.
"Well, generally we try to make sure it increases at least at the rate of inflation. Right now that seems to be around 2 1/2%."
Mei's brow furrowed as she processed what Enrique had said. "So..." she began, "if I have a beginning and ending year, and their initial salary, I can use that rate to get to the final value. Does someone starting at age 25 and working until retirement at age 65 seem reasonable to you? And if so, what starting salary should I use?"
"That obviously depends on the position and role, but I think $50,000 is a safe assumption."
Mei smiled. "OK, I think we've got what I need." She looked directly into his eyes. "Thank you for your help, Enrique."
"Sure thing, Mei, you've done the same for me before."
"Oh, I almost forgot, what are your kids going to be for Halloween?"
It was Enrique's turn to smile. "One is going to be a magician, with a black top hat and all that, and our youngest is going to be a big pumpkin! They're both so psyched!"
Prior to funding a commitment, we need to know what the value of that commitment is - the end goal. For a pension obligation, this is the payout the employee will receive over their life expectancy based on a 'payout formula'.
Using the one Mei discussed, Figure B shows the calculation of the final salary and the annual payment to be received.
Figure B also calculates - assuming 25 years of retirement (being paid at the beginning of each year), and using a 5% rate of return (the current expected return on long-term quality bonds for the sake of this post) - the net present value of these payments at the time of retirement.
With the calculation in Figure B, we now know what needs to be funded. The next question we need to ask is how do we get to this $1,629,144.90 number?
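As a rough check on these numbers, here is a small Python sketch of the Figure B calculation (Mei's own work was done in worksheets and R). The salary-escalation convention below, which applies one extra year of growth, is an assumption chosen so the total lines up with the roughly $1.63 million figure:

```python
# Sketch of the Figure B calculation. All inputs come from Mei's
# conversation with Enrique; the extra year of salary growth is an
# assumption made here so the result matches the quoted target.

def db_obligation(start_salary=50_000, growth=0.025, years_service=40,
                  accrual=0.02, retirement_years=25, discount=0.05):
    # Final salary after annual escalation (escalated through the year
    # of retirement - an assumed convention).
    final_salary = start_salary * (1 + growth) ** (years_service + 1)
    # Payout formula: 2% of final salary per year of service.
    annual_payment = accrual * years_service * final_salary
    # Present value at retirement of 25 beginning-of-year payments,
    # discounted at 5% (an annuity-due).
    npv = sum(annual_payment / (1 + discount) ** t
              for t in range(retirement_years))
    return final_salary, annual_payment, npv

final_salary, payment, target = db_obligation()
print(f"final salary at retirement: {final_salary:,.0f}")   # ~137,600
print(f"annual pension payment:     {payment:,.0f}")        # ~110,000
print(f"obligation at retirement:   {target:,.0f}")         # ~$1.63 million
```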
We can adopt a number of different strategies:
- ::Fund the entire obligation up front
- ::Fund through time in direct proportion to service cost
- ::Fund through time at a constant percentage of salary
- ::Fund through time as a "levelized" payment
- ::Fund through time at a "target percentage"
Using some of the worksheets from the last Treasury Cafe post ("Pension Potholes: Mei and the Curse of Exponentiation"), Figure C shows the annual funding amounts required under each scenario during the employee's career. From this graph "The Curse of Exponentiation" is apparent, as the approaches that start low increase over time at an increasing rate by the end of the employment term.
Note that the percentage of salary method is essentially equivalent to the DC plan approach, since in most cases employers contribute (or in some cases "match" a 401k contribution) to employees' retirement plans on this basis.
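Three of these strategies can be solved analytically so that each one's invested contributions accumulate to the same retirement target. The Python sketch below does this at the post's 5% return; the target value is the Figure B obligation rounded to the nearest dollar, and the percentage-of-salary solution follows the same closed-form spirit as Mei's later Figure U equation:

```python
# Solve three funding strategies so each accumulates to the same
# retirement target. Inputs follow the post (5% return, $50,000
# starting salary growing at 2.5%, 40-year career).

def accumulate(contributions, r=0.05):
    """Future value at retirement of beginning-of-year contributions."""
    n = len(contributions)
    return sum(c * (1 + r) ** (n - t) for t, c in enumerate(contributions))

TARGET = 1_629_145.0          # Figure B obligation, rounded
N, R = 40, 0.05
salaries = [50_000 * 1.025 ** t for t in range(N)]

# 1. Up front: a single deposit whose 40 years of growth hits the target.
up_front = TARGET / (1 + R) ** N

# 2. Levelized: the same payment every year (a sinking fund).
level = TARGET / accumulate([1.0] * N, R)

# 3. Percentage of salary: solve for the flat percentage analytically.
pct = TARGET / accumulate(salaries, R)

print(f"up-front deposit:  {up_front:,.0f}")
print(f"level payment:     {level:,.0f}")
print(f"salary percentage: {pct:.1%}")
```

The "Curse of Exponentiation" shows up directly: the later a dollar goes in, the less compounding works for it, so back-loaded strategies must contribute far more in total.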
Now that we know what the obligation is and alternative ways to fund it, we next need to consider how these funds will be invested prior to the time that they need to be paid out.
Funds that have been set aside in order to satisfy a target obligation are not shoved under a mattress. They are invested in order to earn a return until the time they are needed.
For the simulation in this post we will assume that the company is able to invest in 4 asset classes: short-term fixed income, long-term fixed income, equity, and alternatives (e.g. hedge funds, private equity, venture capital, etc.).
For the DC alternative, we will assume the short-term fixed income, long-term fixed income and equity are the sole options, since individual investors do not generally have access to good alternative-asset classes unless they are already quite "well off".
Figure D shows R output (R is an open source statistical software) for each asset class's expected return and standard deviation, while Figure E shows the correlations between these securities.
As we mentioned earlier, individuals will need to follow a "glidepath" approach to their investment portfolio while the company will be able to keep its portfolio constant through time. How will we model this glidepath process?
In a prior post ("Should You Rebalance Your Investment Portfolio?") we used an "efficient frontier" approach using a minimum variance target to determine the asset make-up of the portfolio. We can use this concept for this problem by assuming that the DB plan maintains a consistent variance target while the DC participant uses a decreasing target variance over time. The investment firm BlackRock identifies a variance glidepath in this research article which we can use as a guide.
Figure F shows a comparison between the variance targets of the DB and DC plans.
We can also use an assumption based on target asset allocation percentages. For the DB plan case, we will assume a 50% equity, 40% LT fixed income, and 10% alternative investment composition (these percentages are 'in the ballpark' for corporate pension plans in the US). For the DC case, we will assume that the portion of equity begins at 90% and is lowered to 40% as time progresses. The percentages used are an approximation based on those found in Vanguard's Target Date Funds. We will also assume the fixed income component shifts proportionally from long-term to short-term between the ages of 60 and 90.
Figure G shows a comparison between the equity and alternatives targets of the DB and DC plans.
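As a sketch, the DC glidepath just described might be encoded as follows. The exact breakpoints (90% equity held until age 41, gliding down to 40% by 65, and the long- to short-term fixed income shift between 60 and 90) are assumptions approximating the target-date pattern, not Vanguard's actual schedule:

```python
# Hypothetical DC glidepath based on the allocation assumptions in the
# post: 90% equity early, 40% by retirement, fixed income shifting from
# long-term to short-term between ages 60 and 90.

def dc_weights(age):
    """Return (equity, long_term_fi, short_term_fi) portfolio weights."""
    # Equity glidepath: 90% for roughly the first 16 years, then a
    # linear decline to 40% at the age-65 retirement date.
    if age <= 41:
        equity = 0.90
    elif age >= 65:
        equity = 0.40
    else:
        equity = 0.90 - (0.90 - 0.40) * (age - 41) / (65 - 41)
    # Remainder is fixed income, shifted proportionally from long-term
    # to short-term between ages 60 and 90.
    fixed = 1.0 - equity
    if age <= 60:
        st_share = 0.0
    elif age >= 90:
        st_share = 1.0
    else:
        st_share = (age - 60) / 30
    return equity, fixed * (1 - st_share), fixed * st_share

for age in (25, 53, 65, 75, 90):
    e, lt, st = dc_weights(age)
    print(f"age {age}: equity {e:.0%}, LT fixed {lt:.0%}, ST fixed {st:.0%}")
```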
We now have two different approaches to determine the portfolio composition. Why don't we figure out which one is "best" and simply use that one?
There are several issues to consider.
First, a lot of the data we use comes from historical market performance, which contains a lot of "noise" - correlations do not stay constant over time, volatility is itself volatile, theories about how the market works vary, etc. Using one approach could be akin to the old story about the person building their house on the sand.
Using two different approaches lets us deploy the triangulation process (see "Does Your CFO Got What it Takes?" for a description of the triangulation process) in our analysis. The more a conclusion shows up across the different methodologies, the more confident we can be in it. The more varied the results, the less definitive the conclusions that can be drawn.
In addition, it forces us to reconcile the results of the different methods. This reconciliation process can identify inadvertent mistakes in the simulation set-up, such as coding or math errors. But even better, it creates additional data which we can analyze allowing for deeper, more granular insights than if we used just one method.
"The thing with stock prices is that they are lognormal" Biff Tarplin told Mei over the phone.
Biff was the company's primary consultant for the pension investment portfolio. "But to model them you generally need to use normal factors, since most statistical software calculations, from Excel to R, use these in the calculation rather than lognormal variables" he continued.
"So Biff, am I correct in saying that if I have a return calculated from the stock price, such as 8%, then that return is lognormal and needs to be converted to normal in order to arrive at the correct distribution?"
"You're spot on, Mei. That's exactly what you need to do. But also keep in mind that in order to tie amounts through time you want to focus on the geometric average, not the arithmetic one."
"I remember something about that, the geometric will be lower due to the ups and downs in any given year?"
"Exactly, the focus is on final values, not annual averages."
"Thanks, Biff, this was helpful." Mei sometimes wondered what Biff's professional life was like, breezing in and out of different companies on a regular basis. The variety was probably profoundly interesting, but she suspected that all the fun work was performed elsewhere in the firm and not by Biff himself.
Mei turned to her computer. Since the information in Figure D was lognormal, she would need to convert these to normal parameters.
For this process she used the equations shown in Figure H. She computed the normal parameters (the natural logarithms of the Figure D parameters), added these to get a normally distributed ending stock price, and then took the exponential of this value to represent the ending stock price one would see quoted in a daily paper or internet site.
In order to calibrate this procedure, she ran 10,000 simulations. The results shown in Figure I compare the Monte Carlo output with the theoretical expectations; the two are quite close. This indicated that the model was performing the calculations correctly.
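The conversion Biff described can be sketched as below. The formulas are the standard moment-matching equations for a lognormal distribution; the 8% mean and 16% standard deviation are illustrative stand-ins, not the actual Figure D parameters:

```python
import math
import random

# Convert a lognormal mean/standard deviation into the parameters of
# the underlying normal distribution, simulate, and check that the
# simulated moments come back out. Inputs are illustrative.

def lognormal_to_normal(mean, sd):
    """Moment-matching: parameters of ln(X) for a lognormal X."""
    sigma2 = math.log(1 + (sd / mean) ** 2)
    mu = math.log(mean) - sigma2 / 2
    return mu, math.sqrt(sigma2)

m, s = 1.08, 0.16                 # gross return: mean 8%, sd 16% (assumed)
mu, sigma = lognormal_to_normal(m, s)

random.seed(42)
draws = [math.exp(random.gauss(mu, sigma)) for _ in range(200_000)]
sim_mean = sum(draws) / len(draws)
sim_var = sum((x - sim_mean) ** 2 for x in draws) / (len(draws) - 1)
sim_sd = math.sqrt(sim_var)

print(f"target mean {m:.4f} vs simulated {sim_mean:.4f}")
print(f"target sd   {s:.4f} vs simulated {sim_sd:.4f}")
# Biff's point about the geometric average: the median gross return
# e^mu sits below the 8% arithmetic mean.
print(f"geometric (median) return: {math.exp(mu) - 1:.1%}")
```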
So far so good.
A Monte Carlo analysis requires that we program a series of steps. We might do this by writing code in R, or VBA in Excel, or use software where some of these steps have been programmed for us by the developer.
No matter the source, if we are to rely on the results of the analysis, it is important to create tests that ensure that the programs we have created are working right.
Figure I is therefore a critical step in the process.
Another way to validate our model is to examine items visually. For example, Figure J compares the median simulated stock price for each year of the 100-year period to the theoretical value computed using the methodology from page 300 of John Hull's book "Options, Futures, and Other Derivatives".
Figure K compares the mean values of the ending years' stock prices with those that would theoretically occur using the normal to lognormal conversion equations from the Treasury Cafe post Simulating ROI (Return on Investment): or What's So Normal About Logs?.
Figures J and K provide an indication as to the magnitude of differences we encounter when dealing with lognormal distributions. In Figure J, the y-axis goes to about 2000, indicating that in 100 years the median value is around that number. The mean value, the subject of Figure K, is almost 10000, or close to 5 times the median. Since the median and the mean are both termed "averages", yet significantly different, we need to be clear what is meant when that term is used.
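That roughly five-fold gap is exactly what lognormal algebra predicts. For a lognormally distributed terminal price, the ratio of mean to median is exp(sigma^2 * T / 2); the 18% volatility below is an illustrative assumption chosen to show how a gap of that size can arise over 100 years:

```python
import math

# Mean-to-median ratio of a lognormal terminal price. The volatility
# is an illustrative assumption, not a parameter from the post.

sigma, T = 0.18, 100
ratio = math.exp(sigma ** 2 * T / 2)
print(f"mean / median after {T} years: about {ratio:.1f}x")
```

This is why being precise about which "average" is meant matters so much: for skewed distributions the mean and median diverge, and the divergence compounds with time.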
At this point we can feel pretty comfortable that our simulation of a stock price series has been programmed correctly.
Since we need to simulate 4 separate price series, we will need to replicate this base methodology, but with the added wrinkle that the price series need to exhibit the correlation patterns shown in Figure E.
We can ensure that this occurs by using the Cholesky Decomposition.
The Cholesky process ensures that the 4 random numbers impact each of the 4 asset classes in a manner that maintains the intended correlations (note: I thought I had gone through an explanation of the Cholesky process in prior blog posts but found I have not; once I actually write what I thought I had already written, I will put the link here!).
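In the meantime, here is a minimal self-contained sketch of the idea: factor the correlation matrix as L times its transpose, then multiply L into vectors of independent standard normals to obtain correlated ones. The 3x3 correlation matrix is illustrative, not Figure E's:

```python
import math
import random

def cholesky(a):
    """Lower-triangular L with L * L-transpose = a (a positive definite)."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(a[i][i] - s)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

# Hypothetical correlation matrix for three asset classes.
corr = [[1.0, 0.3, 0.1],
        [0.3, 1.0, 0.5],
        [0.1, 0.5, 1.0]]
L = cholesky(corr)

# Multiply L into independent normals to induce the correlations.
random.seed(7)
n_draws = 50_000
samples = []
for _ in range(n_draws):
    z = [random.gauss(0, 1) for _ in range(3)]
    samples.append([sum(L[i][k] * z[k] for k in range(i + 1))
                    for i in range(3)])

def corr_of(i, j):
    """Empirical correlation between simulated series i and j."""
    xi, xj = [s[i] for s in samples], [s[j] for s in samples]
    mi, mj = sum(xi) / n_draws, sum(xj) / n_draws
    cov = sum((a - mi) * (b - mj) for a, b in zip(xi, xj)) / n_draws
    vi = sum((a - mi) ** 2 for a in xi) / n_draws
    vj = sum((b - mj) ** 2 for b in xj) / n_draws
    return cov / math.sqrt(vi * vj)

print(f"target 0.30, simulated {corr_of(0, 1):.3f}")
print(f"target 0.50, simulated {corr_of(1, 2):.3f}")
```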
Under the "make sure our model works as intended" objective, we need to compare the simulated returns and correlations to the ones we intended to simulate. Figure L shows the simulated returns for all the asset classes, on both a normal and lognormal basis. Figure M shows the correlations, which are very close to the ones shown in Figure E.
"What are the pieces of information that you really want to see?" Mei asked Aisha over the phone. "I want to make sure the Monte Carlo captures the critical items"
There were a few moments of silence as Aisha thought through her answer. "Well, from the company's point of view, I think we want to know what is the outflow of cash under the new approach vs. the old one."
Mei waited as Aisha thought some more.
Aisha continued "And from the employee's perspective, we know that under the DB plan they receive a payment for as long as they live. As an employee I would want to know what the chances are that I do not have enough funds for retirement. Does that make sense?"
It was Mei's turn to pause and process. "Yes, that does. There may be other items of interest, such as earnings impacts, that might be considered, but this would require another layer of analysis which would take some time."
"Well let's hold off on that then, Mei. We can get to that in another round, right?"
"Oh yes. It's just a tricky calculation, with things amortizing in and out, that requires a lot of additional variables that need to be set up and tested. It's really a timing thing, and I know we are meeting tomorrow..."
"Absolutely, let's do this one step at a time. If we have some information, it might help us figure out what the next direction is without going down blind alleys."
"Good point. See you tomorrow at 2!"
A Monte Carlo analysis can be used for many things. As the dialog above suggests, it is important to define the questions we want answered.
We can go down a lot of rabbit holes otherwise.
The reason for this goes back to the programming element we discussed in the last section. The more information we want to capture, the more programming will be required to do it...meaning more time will need to be spent.
Thus, if we have deadlines, we need to make sure we get the information that is required while preserving the ability to get more later if we so choose.
In addition, like a lot of business issues, the process will be iterative. Do not think that our Monte Carlo analysis is a "one and done" thing. The information generated will spur additional questions. Distributing the information to a wider group of people will spur additional questions. As we go through the process of setting up the analysis it will spur additional questions.
Some of these questions may warrant further investigation, but that need will evolve over time and cannot be immediately known up front.
That being said, to the extent it is not too cumbersome we should capture the information to answer questions we may already suspect will come up. Having an immediate answer enhances the confidence people will have in the results.
And this is especially important for a Monte Carlo analysis, because most do not trust a "black box". Answering questions, and showing this in an understandable way, is one way to remove some of the mystery surrounding the process.
For Mei, the critical questions are:
- ::How much cash will the company need to contribute?
- ::How likely is it that employees will not have enough funds for retirement?
With these ends in mind, we can begin generating the data we require.
Using the asset price model she had developed, Mei turned to the task of modeling the investments in those assets.
Based on the allocation methods derived from the data underlying Figures F and G, she taught the program how to allocate funds in the correct proportion, and then programmed the calculation of the results.
All systems go. She pushed the button on her computer. The indicator on the machine told her the process was underway.
She knew it would take a little while.
"Might as well get something to eat while I can" she thought, as she pushed the chair away from her desk.
The beginning of the simulation can often be the most anti-climactic moment of the whole process.
The program is in the process of doing everything we have told it to do, and the results are the only item that remains.
Depending on the complexity of the calculations and the number of iterations, we can wait anywhere from a minute to a day for the results to be made available to us.
Yet pushing the button does not signify the ending of our task. We will have spent a lot of time getting to this point in the process, but it is not over yet. What remains is for us to get the results, examine them, understand what has occurred, and present them to others in an understandable way.
We ain't out of the woods yet!
Mei was happy to see upon her return that the processing was complete. With eager anticipation she began to investigate the outcomes of the procedure.
She decided first to look at funding the plan from the company's point of view using the 40-50-10 investment allocation, and made a graph of the distribution of outcomes at the point of the employee's retirement comparing three of the funding alternatives: up front, percentage of salary, and levelized. This is shown in Figure N. The statistics themselves are shown in Figure O.
All of the strategies had a mean that was higher than the target (i.e. a little over $1.6 million), making them all potentially viable alternatives.
Mei noticed that the distributions were such that one could almost imagine putting their hand on the green line and squishing it down, with the end result being the gold line - a lower 'hump' and more extreme values. This is due to the standard deviation of the Up Front funding strategy being larger than that of the other strategies.
"Interesting," she thought. "There seems to be a relationship between standard deviation and the amount of time the investments were made. Since the Up Front strategy is funded 100% at the start, it had the longest time in the market."
Mei calculated the weighted average time an investment was in the market using the formula in Figure P. She then divided each strategy's standard deviation by this value, with the results shown in Figure Q.
This showed that the standard deviation of each strategy was essentially the same once the number of years in the market was taken into account.
Clearly there were a number of factors at play here. The company could fund all at once, be done with it, but would then face more volatile end results. The more it funded through time, the less volatile the results in the plan. In theory, the volatility associated with this return would simply occur outside of the pension portfolio rather than in it.
Mei then turned her attention to the DC style contribution and the difference with the DB. She had modeled each of the funding scenarios, though the only practical option was the Percentage of Salary method.
The major difference between these two scenarios was the glidepath that individual investors would execute (as shown in Figure F), and differences in the portfolio value at the time of retirement would be determined primarily by this difference.
"Interesting", she thought. "The mean is higher for the DC than for the DB, but so is the standard deviation. The medians, on the other hand, are about the same. What does this mean?"
She reflected further. "Since standard deviation is higher, this will push the mean further out, just as was shown in Figure N."
"Why would standard deviation be higher?" she mused. "Well, since there is a 90% allocation to equities the first 16 years, compared to a lower amount for the corporate plan, this could be a reasonable explanation."
In order to test this explanation, she compared the change in the standard deviation as a percentage of mean value for the DB and DC investment allocations. The results in Figure S show that the rate of change does become lower as the asset allocation percentages change through time.
Mei turned her attention to comparing the DB plan to the new DC plan. The distributions were very similar, with only slight variations, as shown in Figure T.
She again paused to think about this conclusion. The amount of funds in each was the same. She had calculated the required percentage of salary analytically by using the equation shown in Figure U. The major news would be if there were a major difference. Since there wasn't, this provided greater confidence in the results she was examining.
In addition to the distribution of the ending value, she directly calculated the percentage of times the ending portfolio value at the time of retirement was lower than the target, and the percentage of times the ending value at age 90 was less than zero. This table is shown in Figure V.
Between the two scenarios - allocating the DB and DC investments in a strategy where contributions were made as a percentage of salary - the DB had about a 37.5% chance of being underfunded, while the DC had a 39.5% probability. At about 2 percentage points, the difference is not dramatic.
By the time the retiree reached age 90, this difference had tripled to about 6 percentage points, due to the retiree's more conservative investment allocation compared to the company's.
Mei then turned her attention to the differences of portfolio composition within each strategy. In the first DB pass, a fairly standard 0-40-50-10 allocation was made. In the second DB pass, the portfolio allocation was determined by targeting a variance level and optimizing the highest return that would maintain that level.
Figure W shows a comparison of the ending values at retirement between these two approaches for the DB Pct Salary method. The Target Variance method shows a much tighter range, though the data in Figure V show that the chance of not hitting the retirement value is 17% greater.
Mei concluded that the explanation for this was likely that the Standard Allocation had a much higher standard deviation than the Target Variance. However, the Standard Allocation does not explicitly optimize for return given risk. She wondered if, controlling for risk and return, the Target Var pattern was superior.
In order to answer this question, she calculated the Sharpe ratios for each year of the investment up until the retirement date, with the results shown in Figure X. While the Standard Allocation strategy had a higher mean and a higher likelihood of meeting the funding target, it was not as efficient as the Target Variance method in terms of 'bang for the buck'.
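For reference, the Sharpe ratio is excess return per unit of volatility. The return, risk, and risk-free numbers below are illustrative stand-ins, not the Figure X values, but they show how a lower-returning portfolio can still win on a risk-adjusted basis:

```python
# Sharpe ratio: (return - risk-free rate) / standard deviation.
# All numbers here are illustrative assumptions.

def sharpe(mean_return, sd, risk_free=0.02):
    return (mean_return - risk_free) / sd

standard_allocation = sharpe(0.075, 0.12)   # higher return, higher risk
target_variance = sharpe(0.060, 0.06)       # lower return, much lower risk
print(f"Standard Allocation Sharpe: {standard_allocation:.2f}")
print(f"Target Variance Sharpe:     {target_variance:.2f}")
```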
Since pension investments are 'stranded' in the plan, in order to retain optionality on the funds a company is wise to delay contributions as long as possible. This optionality is lost when moving to a DC plan. Mei decided to see how much value had been given up by the company in order to move to the DC plan.
She added some funding logic to her previous model, essentially running the results through a test. If the value of the portfolio at the end of the year was higher than where the target value of the portfolio should be for that year, the amount funded would be limited to that required to get funding back up to target, rather than the full percentage of salary amount.
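This funding-limit test might be sketched as follows; the portfolio values, target, and 10% contribution rate are illustrative, not Mei's model parameters:

```python
# Contribute the percentage-of-salary amount, but no more than is
# needed to bring the portfolio up to that year's target funding level.
# All values below are illustrative.

def capped_contribution(portfolio_value, target_value, pct, salary):
    """Lesser of pct-of-salary and the current funding shortfall."""
    shortfall = max(0.0, target_value - portfolio_value)
    return min(pct * salary, shortfall)

# A year of strong returns leaves the plan overfunded, so the next
# contribution is limited (here, to zero).
assert capped_contribution(1_050_000, 1_000_000, 0.10, 80_000) == 0.0
# An underfunded plan gets the full percentage-of-salary amount...
assert capped_contribution(900_000, 1_000_000, 0.10, 80_000) == 8_000.0
# ...unless a smaller amount is enough to restore full funding.
assert capped_contribution(995_000, 1_000_000, 0.10, 80_000) == 5_000.0
```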
Secondarily, any value in a DB portfolio can be used to fund other employees, which the company loses when it defines its contribution.
Figure Y shows the present value of the contributions assuming one or both of these scenarios, comparing it to both portfolio asset allocation methods. By simply allowing contributions to be limited to full funding, the company reduces the cost by about 2% to 10%. When incorporating the return of excess funds to others within the plan, the cost was reduced by about 10% to 50%!
Mei let out a low whistle. "That's a lot of money we've decided to spend!" she thought.
An analysis such as we have done here is not a 'be-all, end-all' type of thing. We have added information and examples to the pool of knowledge and meaning, but there is always more that can be done.
Each time we complete an analysis, a number of questions can be asked:
- ::What are the actions we can take based on this information?
- ::What type of follow-up information is desirable?
- ::Are there specific pieces within the analysis that we would like to 'drill down' and study more thoroughly?
- ::How does this reconcile with other analyses that have been performed, and what can we learn or conclude from the similarities and/or differences?
The list goes on...
Some of the takeaways from this analysis are:
Make Allowances for Imprecision - Because normal variables come from a symmetric distribution, simulations using them produce much "tighter" results relative to expectations than do lognormal variables. This is due to the non-linear asymmetry of the lognormal distribution. Figure Z shows the differences between the simulated and expected values of the means and standard deviations of each of the 4 asset classes. There is a much wider range of difference on the lognormal side. Since we live in a world where asset returns are lognormal, we need to take this imprecision into account when interpreting or making decisions based on our analytical results.
Don't Drop the Investment Ball - Figure V showed the percentage of times the target value (either at $1.6 million or zero) was not achieved. In almost all cases, this percentage was lower at age 90 than at retirement. This indicates that the performance of the investment portfolio during retirement - a time when it is being drawn down and depleted - is still a significant factor. One is not done once the payments begin, and the decisions during this time will be important. Don't fall asleep at the switch!
DB Plans Have an Advantage - because DB plans can keep investing at higher levels of risk (modeled in this post by having access to higher returning investments) they reduce the probability of funding shortfalls. In addition, since they pool individuals, the law of large numbers works for them where it does not for the individual. Clearly employees are worse off when a company moves from one type of plan to the other.
Removing Uncertainty is Valuable - given the compelling economic advantages of the DB plan over the DC (up to 50%, as shown in Figure Y), the fact that companies continue to move away from these plans to ones that are economically more expensive shows just how willing they are to remove volatility from their income statements and balance sheets. If markets were efficient, one would assume that this type of 'mirage' would be seen through by others; but perhaps the complexity of the accounting, the length of the horizons, and the time and discipline it takes to see the results cloud the situation enough that the inefficiencies are never exploited.
Portfolio Composition Matters A Lot - In this analysis we looked at two different methods of constructing a portfolio. One was based on allocation percentages to certain asset classes, patterned on target date funds. The other was to specify a variance that declines through time and then use portfolio math to create a portfolio that achieves it. The chance an employee would not have enough funds to last their lifetime increased by 45% between the two approaches! Decisions about portfolio construction can make or break you.
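To make the "target variance" idea concrete, here is a sketch using two hypothetical asset classes (the return, risk, and correlation figures are illustrative, not those used in the analysis). Standard two-asset portfolio math - the portfolio variance is a quadratic in the stock weight - lets us solve for the mix that hits a given target volatility, and shows the expected return falling as the target falls.

```python
import numpy as np

# Hypothetical two-asset universe: stocks and bonds
r = np.array([0.08, 0.04])   # expected returns
s = np.array([0.18, 0.06])   # standard deviations
rho = 0.2                    # correlation

def weight_for_target_vol(target_vol):
    """Solve a*w^2 + b*w + c = 0 for the stock weight w hitting target_vol."""
    a = s[0]**2 + s[1]**2 - 2 * rho * s[0] * s[1]
    b = 2 * (rho * s[0] * s[1] - s[1]**2)
    c = s[1]**2 - target_vol**2
    roots = np.roots([a, b, c])
    # keep the real root in [0, 1], preferring the higher-return (more stock) mix
    w = max(x.real for x in roots
            if abs(x.imag) < 1e-9 and -1e-9 <= x.real <= 1 + 1e-9)
    return min(max(w, 0.0), 1.0)

for target in (0.06, 0.10, 0.14):
    w = weight_for_target_vol(target)
    exp_ret = w * r[0] + (1 - w) * r[1]
    print(f"target vol {target:.0%}: stocks {w:.1%}, expected return {exp_ret:.2%}")
```

Lowering the variance target forces money out of the higher-returning asset, which is exactly the trade-off Mei flags below: the target-variance portfolios earned less than the allocation-based ones.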
Mei sat across from Aisha at the conference table in the VP's office.
After a brief exchange of pleasantries, she began to discuss the results of her analysis.
"We ran a comparison of 3 different funding strategies through 2 different portfolio construction methods, each from both a DB and a DC plan perspective, for a total of 12 different scenarios. Some of these are relevant and some are not. For instance, fully funding a DC participant's benefit likely does not make a lot of sense, since they may not stay at the firm all the way to retirement."
"You wanted insight into two different questions. The first was how we should fund the benefit going forward. I would recommend that we use a percentage of salary method for a couple of reasons. The first is that this is a fairly common approach in the industry so we will not appear as an outlier. The second is that the approaches that delay investments into the plans reduce the range of outcomes, so employees will be less likely to experience financial distress due to insufficient balances."
Mei showed Aisha the data contained in Figure V, pointing out the lower occurrences of missing target levels under this approach than under the others, for all but the DC Target Var approach.
"Using the percentage of salary approach also works in factors such as inflation, doesn't it Mei?" Aisha asked.
"Yes" Mei replied. "I think using a level approach would be problematic. So many things can change in 40 years, such as your point about inflation, that it is probably not..." she struggled to find the right phrase "...realistically plausible."
"To your second question, I've already shown you [note: in Figure V] the likelihood employees do not have enough to last through retirement. In the analysis this ranged from about one-third to close to 80%!"
"That seems pretty high" Aisha murmured.
"Yes, I thought so too" Mei responded. "But remember this is partly an artifact of the simulation. The big driver of the difference was the portfolio construction. In one approach we used asset allocations representing percentages common in the field, while in the other we specified a target variance. The expected return from the second approach was lower than the first, which simply suggests that if we ran the Monte Carlo again we should raise the variance target in order to increase the return."
She paused for a moment, and looked to make sure Aisha was still following her train of thought. It seemed that she was.
Mei continued. "The critical conclusion is not the precise number but the fact that how employees create and manage their portfolios will be the decisive determinant of whether or not they have enough."
Aisha nodded. "Yes, that makes sense. But with all this target variance and stuff, that seems a little complex, doesn't it?"
"You're right, it does" Mei agreed. Though she loved doing this type of analysis, she knew that others were not necessarily eager to do what she had done. She also knew that she was able to do it only through a lot of specialized education and training - things most others would never have unless they had chosen the same path in life.
Mei made sure to hold eye contact with Aisha to drive her point home. "They are going to need a lot of help."
Aisha nodded her understanding, and the faces of some of the thousands of people impacted by this decision flashed through her mind.
"Yes they are, Mei....yes they are."
From a purely economic perspective, a DB retirement plan allows the company to perform better than it would under an equivalent DC style fund, due to its ability to invest in a wider range of investment alternatives, its ability to take advantage of the 'law of large numbers', and its analytical resources and capabilities. Accounting data does not always make this clear when viewed from a narrow slice of time. Because of this, firms may be willing to sacrifice the long-run advantages if the short-term ones are compelling enough.
- Has your organization recently changed its retirement plan structure? What has been the effect?
- What 'next steps' would you recommend be taken to extend the analysis in this post?
Add to the discussion with your thoughts, comments, questions and feedback! Please share Treasury Café with others. Thank you!