Utility Analysis Essay

EXECUTIVE SUMMARY

It is difficult for business leaders to open a newspaper or read a journal without seeing the warning that their only competitive advantage in the world economy is the knowledge embodied in the people they employ. Invest in your people, the gurus say, and your business will survive and profit. Yet when these leaders ask their Human Resource (HR) specialists to present them with proposals to increase the firm’s human capital wealth, these proposals are generally filled with vague claims of increased competitiveness and lower total costs.

These proposals frequently fail because they lack a sound financial analysis on which the company can justify a decision. The single most important analysis an HR professional can provide a CFO is a utility analysis for the Human Resource interventions he or she advocates. The major reason for great concern about utility analysis is that this technique ties human resource interventions to the measuring unit of the business world—dollar value.

With the Brogden utility estimation equation one can avoid the laborious effect size indices developed by academic researchers and present to management estimates of the dollar value contribution of human resource intervention. In this report we discuss several HR interventions, identify the primary measurable benefit that can accrue to the organization from each action, and discuss the ways to quantify these benefits in the field. Utility analysis is a quantitative method that estimates the dollar value of benefits generated by an intervention based on the improvement it produces in worker productivity.

The ROI from various HR interventions, such as personnel selection, recruitment tests, training and development, and various others, has been explained with the help of examples.

Introduction

The field of human resource management has been searching for ways to better assess the value of human capital development programs. The general argument is that if the impact that human resource programs have on the financial bottom line could be evaluated, the company’s decision makers would be more willing to allocate resources to further develop these programs.

With this idea in mind, researchers have devised several utility analysis techniques which can help translate traditional HR measures, such as validity coefficients and statistical distributions, into estimates of monetary profit. Utility analysis has become an established quantitative method of evaluating human resource programs. It can make a valuable contribution to judgments and decisions about investment in human resource management. Utility analysis provides managers information they can use to evaluate the financial impact of an intervention, including computing a return on their investment in implementing it.

It was introduced as a method for evaluating the organizational benefits of using systematic procedures (e.g., proficiency tests) to improve the selection of personnel, but it extends naturally to evaluating any intervention that attempts to improve human performance. Utility analysis of human resource interventions emerged as a focused topic of research in the 1980s. The major reason for great concern about utility analysis is that this technique ties human resource interventions to the measuring unit of the business world—dollar value.

With the Brogden utility estimation equation (Brogden 1946, 1949), one can avoid the laborious effect size indices developed by academic researchers and present to management estimates of the dollar value contribution of human resource interventions. The classical model of utility analysis, called the BCG utility model after its originators Brogden, Cronbach and Gleser, is:

ΔU = N x T x SDy x dt - C

The value of the selection program is measured in dollars and is represented by the symbol ΔU. N is the number of accepted applicants who remain with the organization for a time period T.

SDy represents the standard deviation of job performance (in dollars) in the applicant population. The part of the utility function to the left of the subtraction sign estimates the return produced by the intervention. From this sum the costs of the measures (C) are subtracted.

BASIC ASSUMPTIONS

The first assumption of utility analysis is that human performers generate results that have monetary value to the organizations that employ them. This assumption is also the basis on which people claim compensation for the work they do.

The second assumption of utility analysis is that human performers differ in the degree to which they produce results even when they hold the same position and operate within like circumstances. Thus, salespersons selling the same product line at the same store on the same shift will show a variation in success over time with a few doing extraordinarily well, a few doing unusually poorly, and most selling around the average amount for all salespersons. This assumption is broadly supported in common experience and in research.

It is, for example, the basis on which some performers demand and receive premium compensation. The direct implication of these assumptions is that the level of results produced by performers in their jobs has different monetary consequences for the organizations that employ them. Performers are differentially productive, and the productivity of performers tends to be distributed normally.

How Utility Analysis Builds on These Assumptions

The approach of utility analysis asserts that the utility of any intervention can be valued by determining how far up the productivity distribution the intervention moves the performer.

The distance the performer is moved is translated into a productivity gain, and the dollar value of that productivity gain is what is termed the utility (U$) of the intervention.

What Is Needed to Complete a Utility Analysis?

In completing the analysis, the analyst needs to generate the following:
* A method for measuring role productivity,
* A way to assign monetary value to role productivity,
* The distribution of productivity among performers of the role,
* The dollar value of a one standard deviation difference in role productivity (SD$), and
* A method to measure the intervention’s impact on role productivity.

With these elements of information, the analyst can compute the utility of the intervention in dollars. To accomplish the analysis, the analyst must be skilled in the methods of quantitative analysis in general and utility analysis in particular. This person needs to be aware of the variety of ways one can measure human productivity, determine its monetary value, and gauge the effects of interventions on participant performance. Given that there are a variety of methods for computing utility, the exact resources needed for the task will depend on the method the analyst selects.

The minimum set of resources anyone will need is:
* Access to the people who will be using the results of the study to make decisions;
* The identity of the intervention whose utility you will measure;
* A subject matter expert who is knowledgeable of the intervention;
* A description of each affected role including its duties, outputs, and success criteria;
* The compensation scale for each affected role; and
* A subject matter expert who is knowledgeable of the role(s) affected by the intervention.

Getting Ready for the Analysis

1. Understand the people whose decision-making the study will support.

Meet the people who will use the study’s findings in order to understand what information they are seeking, what decisions they will use the information to make, and any issues or concerns they may have about the study. Alert them to your ongoing need for their feedback on the methods you will propose for accomplishing the study. Assure them that you will guarantee that the methods you propose satisfy professional criteria, but that their feedback is needed to ensure that the methods are also credible in their eyes and the eyes of anyone with whom they will share the results.

2. Learn about the intervention you will assess. Tip: Identify the intervention whose utility you will measure and contact the subject matter expert who is knowledgeable about it. Learn about the intervention’s purpose, target population, content, operations, cost, and any metrics used to measure its implementation and effects. Also, uncover what the thinking is about how the intervention affects the productivity of the performers it targets. With these facts, you can determine what information needed for the analysis exists and what information you will need to develop.

3. Learn about the role(s) whose productivity is affected by the intervention. Tip: Obtain a description of each affected role. Contact the subject matter expert who is knowledgeable of each role. Learn each role’s purpose, duties, outputs, and success criteria. You also need to understand how the role is valued from a compensation perspective. For example, is compensation linked to output or is it paid as a salary? You will want to understand, as well, how the company values the output of each job. If the output is sold, is it valued by cost or price?

And you need to uncover how much responsibility each role has for the outputs its performers produce. Finally, for each job that is salaried, obtain its compensation scale and the average salary paid to its incumbents. If salaries are not normally distributed, you may need to obtain either the median or modal salary instead of the mean.

4. Determine how to measure the productivity of the performers of each role. Tip: You will need to develop a productivity measure and a method for determining the status of each role incumbent on the measure.

You will need to use your understanding of each affected role and the assistance of its subject matter expert. The subject matter expert will have to approve the method of measurement you devise; otherwise, your approach to measuring productivity will not have credibility in the workplace. In devising the productivity measure, it is preferable to base the measure on production of correct outputs—for example, the total amount of sales generated less returns, or the number of welds made per unit of time less the number of welds that fail inspection.

Outputs are the tangible contributions a role makes to an enterprise, and measuring the quantity, quality, and complexity of outputs generated by performers is usually a measure of productivity that is readily accepted. Sometimes, however, a workplace will not accept a measure of productivity that is tied to outputs. In these situations, you still need a way to measure how well the role is performed. Sometimes supervisor ratings of successful performance are used, or multirater approaches that use ratings of supervisors, peers, and subordinates (when appropriate).

If the workplace will not agree that different performers achieve different levels of success, or that the level of a performer’s success in performing the role can be measured, then the utility analysis cannot be done. Once you have devised a measure of productivity, plan how you will gather information about the status of role incumbents on the measure. Your method must be feasible, meaning that its cost must be reasonable, its results credible, and its burden on participants acceptable.

5. Determine how to value role productivity in dollars. Tip: The method you choose will be determined by how you measure productivity.

If you use a method that calibrates outputs produced, then you will assign monetary value based on the dollar value of the outputs. If the job produces an interim output, some component of a larger final product, then determine the component’s contribution to the total product and determine the value of the role’s output by adjusting the value of the final output. Material outputs can be valued based on cost or sales price. Service outputs that are used in-house (e.g., a marketing plan, a processed personnel action) can be valued using market pricing—that is, what it would cost to purchase the service from external sources.

If you are not using a measure of productivity that is tied to output, then you can use the typical salary paid for the job (i.e., mean, median, or mode). Salary is acknowledged as reflecting the value a role contributes to a company.

6. Decide how to measure the effect of the intervention on role productivity. Tip: Basically, you need to find a mathematical bridge that relates participation in the intervention and change in role productivity. There are very many ways to accomplish this. One way is to use a control group comparison.

Here, you identify two sets of people who are comparable in all important ways except that one set went through the intervention and the other did not. You compare the differences in productivity of these two sets of people. If the intervention was effective, the people who went through it will have higher productivity scores and the difference between the groups will represent the intervention’s impact on productivity. Another way is to use correlational methods to associate some indicator of participation or benefit from the intervention with scores on role productivity.
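The control group comparison described above can be sketched as a standardized mean difference. This is a minimal illustration with hypothetical productivity figures, not a prescription for how the measure must be built:

```python
from statistics import mean, stdev

def effect_size(treated, control):
    # Standardized mean difference: how many control-group standard
    # deviations the treated group's average productivity lies above
    # the control group's average.
    return (mean(treated) - mean(control)) / stdev(control)

# Hypothetical monthly unit sales for two comparable groups; only the
# first group went through the intervention.
trained   = [112, 125, 130, 118, 127]
untrained = [100, 108,  95, 104, 103]

d = effect_size(trained, untrained)  # positive if the intervention helped
```

A positive d here would represent the intervention's impact on productivity in standard deviation units, the quantity the later utility computations multiply by SD$.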

Be sure that the information with which you are working satisfies the requirements of the statistical method you use and that your approach makes sense to the people who will use the results of the analysis. Your solution needs to satisfy professional standards and be credible in order to provide benefit.

7. Create a plan for the utility analysis. Tip: Be sure your plan documents how you will produce each of the information elements needed to accomplish the utility analysis. Include in it any decision rules you will apply in making judgments.

For example, if you are also computing a return on investment ratio, what rule will you apply to decide if the ratio is positive? Will 1.0 be sufficient? Will the ratio need to be 2.0 or higher? In a professionally conducted analysis, all decision rules must be documented prior to the study.

Doing the Analysis

1. Determine the productivity of performers. Tip: Execute your plan for measuring the productivity of current role incumbents.

2. Determine the dollar value of a one standard deviation difference in role productivity (SD$). Tip: Distribute the productivity scores you gather.

Confirm the distribution is essentially normal and compute its mean and standard deviation. If the distribution is not normal, use a transformation method (e.g., z-transformation) to normalize it. Apply your method for valuing role productivity. Derive the dollar value of productivity achieved by average performers and the dollar value of a one standard deviation difference in productivity (SD$).

3. Compute the effects on performer productivity associated with the performer’s participation in the intervention being evaluated. Tip: Apply your method for measuring the effect of the intervention on productivity.

Determine how many standard deviations of change in worker productivity the intervention produces (SD).

4. Compute the dollar value of productivity improvements generated by the intervention. Tip: The dollar value of productivity improvements generated by the intervention is the intervention’s utility (U$). To compute utility, multiply the number of standard deviations of change the intervention produces in worker productivity (SD) by the dollar value of a one standard deviation difference in productivity (SD$): SD x SD$ = U$.

Multi-attribute utility analysis

After years of applying and fine-tuning these techniques, researchers are puzzled by the negative reaction utility analysis produces in line managers and top executives. For instance, Latham and Whyte (1994) found that using utility analysis to influence managers’ decisions to implement a selection procedure actually lowered their support for the intervention. Some researchers think it is a question of presenting utility information in a way that managers are better able to understand. According to them, managers will not accept the results of utility analysis unless they really understand how it works.

Unfortunately, attempts to present more understandable utility information, while having a low-to-moderate positive effect on acceptance, still resulted in disappointingly low acceptance levels. Thus, the problem of how to convince the organization of the importance of HR practices has been translated into a more complicated problem: how to convince organizations of the importance of utility analysis as a reflection of the importance of HR practices! Utility analysis is based on the assumption that money is the only language that is clearly understood by the organization’s decision makers.

However, managers might have more complicated, perhaps multidimensional or even qualitative, models of what they expect from investments in HR. Roth and Bobko (1997) have proposed a multi-attribute utility analysis which attempts to translate the benefit associated with a given human resource practice into an array of units (not necessarily monetary) that reflect the kind of information managers typically consider in their decision making processes. MAU analysis requires that decision makers first make a list of attributes that they consider important for making the final decision.

The authors present an example of a selection system for which relevant attributes may include diversity, legal exposure, and organizational image in addition to increased value of job performance. Each of the chosen attributes must be measured by a common metric, for example effectiveness points, and combined into a single composite number which represents the benefit of each of the possible interventions. The aforementioned example actually shows a reversal in the decision made using multi-attribute utility analysis versus that which would have been made using traditional utility analysis.
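The weighting-and-combining step of MAU analysis can be sketched as a simple weighted composite. The attributes, weights, and effectiveness-point scores below are invented for illustration; they are not the figures from Roth and Bobko's example, and a real analysis would elicit them from the decision makers:

```python
# Illustrative weights over the attributes mentioned in the text;
# weights sum to 1 and reflect the decision makers' priorities.
weights = {"job_performance": 0.4, "diversity": 0.3,
           "legal_exposure": 0.2, "organizational_image": 0.1}

# Two hypothetical selection systems scored 0-100 on each attribute
# using a common "effectiveness points" metric.
scores = {
    "cognitive_test":       {"job_performance": 90, "diversity": 40,
                             "legal_exposure": 50, "organizational_image": 60},
    "structured_interview": {"job_performance": 75, "diversity": 80,
                             "legal_exposure": 85, "organizational_image": 80},
}

def composite(option):
    # Weighted sum of effectiveness points across all attributes.
    return sum(weights[a] * scores[option][a] for a in weights)

best = max(scores, key=composite)
```

With these invented numbers the cognitive test wins on job performance alone, yet the structured interview wins on the composite, reproducing the kind of decision reversal the text describes.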

MAU analysis is presented as having the added advantage of requiring the participation of decision makers in choosing, measuring and weighing the relevant attributes, and this participation is believed to increase acceptance of final recommendations. But the most important value of MAU is that it may offer a representation of utility that fits better with the multiple-outcome models decision makers need in order to run their businesses.

Strategic Utility Analysis

The goal of utility analysis is to assess the contribution made by one specific initiative to the organization’s objectives.

Strategic utility analysis is utility analysis that is based on the multi-attribute philosophy and incorporates key concepts drawn from the Balanced Scorecard methodology. From MAU analysis it borrows the idea that the utility of a HR intervention is best represented as an array of outcomes, not all of which need to be financial in nature. As it is argued by MAU advocates, this alternative may better reflect decision making processes in the organization and may, therefore, yield more useful information.

The MAU approach, however, raises the problem of the selection of the specific attributes to be included. The strategic approach suggests that the answer to this question is to be found in the strategic measurement system of the organization, and it proposes, as a concrete alternative, the Balanced Scorecard system. Selecting the attributes this way not only provides more useful evaluations of utility, but also emphasizes and supports the role of HR management as a strategic tool.

Different strategies will require different organizational capabilities, and HR interventions should contribute directly to building these capabilities. Thus, the value of HR programs should be determined by assessing how well they help build the organization’s strategic capabilities.

THE BROGDEN UTILITY EQUATION

The basic Brogden utility equation has the following form:

U = N x T x SDy x dt - c (1)

where:
U = the total change in utility in dollars after the training program;
N = the total number of trainees;

T = the number of years of duration of the training effect on performance;
dt = the effect size index, which is the difference in job performance between the average trained and untrained employee in standard deviation units;
SDy = the standard deviation of job performance in dollars for the untrained group; and
c = the total cost of the training program.

Although most of the parameters in Equation 1 can be easily estimated, the use of Brogden’s equation is not unanimously accepted by scholars. The major problem concerns a critical parameter in the equation, the standard deviation of job performance in dollars (SDy).
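Once its parameters are estimated, Equation 1 is a straightforward computation. A minimal sketch, using hypothetical figures for every parameter:

```python
def brogden_utility(N, T, SDy, dt, c):
    # Equation 1: N trainees, an effect lasting T years, SDy dollars per
    # standard deviation of job performance, effect size dt in standard
    # deviation units, and total program cost c.
    return N * T * SDy * dt - c

# Hypothetical figures: 50 trainees, a 2-year effect, SDy of $10,000,
# an effect size of 0.5, and a program costing $150,000.
u = brogden_utility(N=50, T=2, SDy=10_000, dt=0.5, c=150_000)  # $350,000
```

The arithmetic is trivial; as the surrounding text stresses, the hard part is justifying the SDy and dt estimates that go into it.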

Different estimates of SDy have been suggested, but many researchers have questioned the reliability and validity of these estimates. On top of the difficulties in estimating SDy, the utility analysis movement has also aroused heated debate among scholars as to how the final utility estimates should be interpreted. If the estimated mean change in utility after using a valid human resource intervention is $9 million a year for a steel company, what does this $9 million mean? Is this a decrease in cost, an increase in sales, incremental cash flows, or an increase in the value of the firm’s stock?

Some scholars advocated that dollar value utility estimates should be considered as future cash flows and human resource interventions should be evaluated as any other capital investment. On the other hand, Hunter, Schmidt and Coggins (1988) held strongly that techniques like capital budgeting and financial accounting are “conceptually and logically inappropriate” for dollar value utility estimates from the Brogden model.

They argued that the Brogden model defined increase in utility as “increase dollar value as sold”, but not “contribution to after-tax profits”. These utility estimates should, therefore, not be mixed with estimated future cash flows of other capital programs. Most researchers assumed that the answers to these questions lay in the critical variable, the standard deviation of job performance in dollars (SDy). The usual interpretation is that if SDy refers to the firm’s stock price, the final utility estimate will be the change in stock price after initiating the intervention.

Since most existing SDy estimation methods involve subjective estimates of subordinate worth by supervisors, Bobko et al. (1987) concluded that we would better understand the meaning of the final utility estimates if we knew how supervisors judged the overall worth of subordinates. But it will be shown later that the unit of measurement of SDy is not the most important concern in interpreting final utility estimates. Instead, it is the matching of SDy estimates with effect size estimates that matters.

In this paper, the question of whether we should treat human resource interventions like other capital investment programs is left open. Instead, the following discussion will focus on the question of what kind of SDy and effect size estimates are needed for utility estimates with different purposes.

2. The Problem of Interpreting the Brogden Utility Equation

With the usual definition of the effect size index (dt), the utility equation can be expressed in the following alternative form:

U = N x T x SDy x (ye - yc)/s - c (2)

where:
ye = the mean criterion measure for the trained group;
yc = the mean criterion measure for the untrained group; and
s = the standard deviation of the untrained group.

The effect size index (dt) is a measure of the average increase in the criterion measure in the training group in standard deviation units after the training program. If a pretest/posttest control group design is used to evaluate the training program with performance ratings as the criterion measure, the effect size index indicates the difference in mean performance ratings between the experimental (trained) and control (untrained) group after the training program, in standard deviation units.

The usual interpretation of the utility equation is that the effect size index is unit-free. It is SDy which translates the unit-free effect size estimates into monetary utility estimates in the utility equation. In the above example of a pretest/posttest control group design, an effect size of 1.8 means that, on average, performance ratings of trainees will be increased by 1.8 standard deviation units after the training program. An SDy of $10,000 means that 1.8 standard deviation units would be worth $18,000 to the organisation.

Thus, on average, the training program increases productivity by $18,000 per trainee. An even clearer interpretation of the utility equation would be as follows: if dt and SDy are both measured in dollar value sales volume, the two standard deviation terms in Equation 2 (i.e., SDy and s) will be exactly the same and can be cancelled. The final utility estimate will then be the difference in dollar value sales generated by the trained and untrained group. In that case, it is clear that the final utility estimate from the utility model is measured in dollar value sales volume.
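Both the worked example and the cancellation argument above can be checked numerically. The per-trainee figures come from the text; the sales means and standard deviation in the second part are hypothetical:

```python
# Part 1: the worked example -- an effect size of 1.8 and an SDy of
# $10,000 imply an $18,000 productivity gain per trainee.
dt, SDy = 1.8, 10_000
gain_per_trainee = dt * SDy

# Part 2: when dt and SDy are both measured in dollar sales, the two
# standard deviation terms in Equation 2 cancel, and the per-person
# gain reduces to the raw difference in mean sales (figures hypothetical).
ye, yc, s = 60_000.0, 51_000.0, 5_000.0  # mean sales and their SD, dollars
dt_sales = (ye - yc) / s                  # effect size in sales-SD units
gain = dt_sales * s                       # SDy equals s here, so SDs cancel
```

The second computation makes the cancellation concrete: multiplying the dollar-denominated effect size back by the same standard deviation simply recovers the difference in mean sales.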

There are two important assumptions in the above interpretation. First, there should be one single dt estimate that is independent of the unit of measurement of SDy. Second, if the dt estimate is dependent on the SDy estimate, the unit of measurement for both should be exactly the same. In the following discussion, it is first argued that dt estimates are dependent on the proxies of job performance being used in the validation study. It is then further argued that the job performance proxies in most dt estimates are different from the SDy estimate.

The combined result of these two arguments is that many existing dollar value utility estimates are very difficult to interpret. Once we understand that dt may vary with different measures of job performance, the problem of interpreting the utility equation when the unit of measurement of dt does not match SDy becomes obvious. If dt varies with different measures of job performance, its unit of measurement should be exactly the same as the unit of the SDy estimate in order for the two standard deviations of Equation 2 to cancel out.

But when we look at the effect size measures and SDy estimation techniques available in the literature, we find that this assumption is hardly satisfied. Landy and Farr (1983) concluded that three-quarters of various performance measures in published research studies used judgmental criteria as the primary criterion variable. Therefore, the correlation between scores on the selection instrument and future performance ratings is commonly used as a proxy of the correlation between predictor score and dollar value productivity in the Brogden utility model.

But if supervisory ratings are used as the unit of effect size measure and if future cash flow is the desired unit of final utility estimates, then SDy should be interpreted as “the increase in future cash inflow attributed to each standard deviation increase in performance ratings of subordinates”. It is clear that none of the common SDy estimation methods described in the literature refer to future cash inflow attributed to increase in performance ratings of employees only.

Estimated future cash inflow may be one of the factors affecting some current SDy estimates, but it is not the only determinant of the existing SDy estimates. It is therefore concluded that, with the existing definitions and estimation methods of SDy, treating the final utility estimates as future cash inflow is contaminated and inappropriate. When two different operationalisations of job performance used in estimating dt and SDy have a small correlation with each other, the final utility estimate would be very difficult to interpret.

For example, it would be questionable to multiply an effect size measure of “an increase of two standard deviation units in performance ratings” by an SDy estimate of “one standard deviation unit of objective output means $10,000” to get a final utility estimate when the correlation between performance ratings and objective outputs is around 0.3.

3. The Raju, Burke and Normand (1990) New Model of Utility Estimation

The above distinction between dollar value productivity and performance ratings as the criterion in utility analyses was obscurely implied in the new utility model proposed by Raju et al. (1991). Raju et al. accommodated measurement error in their model and produced the following modified equation of utility estimation:

U = N x T x A x SDr x (Ye - Yc)/SDr - c (3)

where:
U = the mean change in utility after a human resource intervention;
N = the total number of selectees;
A = a constant to be estimated;
SDr = the standard deviation of actual supervisory ratings for the position of interest;
Ye = the mean criterion measure for the experimental group;
Yc = the mean criterion measure for the control group; and
c = the total cost of the selection process.

A simple comparison of the Raju et al. model (Equation 3) and the Brogden model (Equation 2) shows that the following equality must be true for the utility estimates from the two models to be equal:

A x SDr = SDy (4)

Equation 4 shows that if supervisory ratings are used as the criterion measure in the effect size index in the Brogden model, SDy should be interpreted as the standard deviation of performance ratings multiplied by a constant A. It is therefore clear that the constant A in the Raju et al. model is used to translate standard deviations of performance ratings into dollar value utilities.
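The equivalence expressed in Equation 4 can be verified numerically. Every figure below is hypothetical, chosen only to show that when A x SDr equals SDy the two models return the same dollar estimate:

```python
# Hypothetical inputs for a selection intervention.
N, T, c = 30, 3, 40_000
ye, yc = 4.2, 3.6   # mean supervisory ratings, experimental vs control
SDr = 0.5           # standard deviation of supervisory ratings
A = 24_000          # assumed dollars of utility per rating point
SDy = A * SDr       # Equation 4 ties the two models together

dt = (ye - yc) / SDr               # effect size in rating-SD units
u_brogden = N * T * SDy * dt - c   # Equation 2 form
u_raju = N * T * A * SDr * dt - c  # Equation 3 form
```

As the text goes on to argue, the numerical agreement holds only because A was defined here as dollars per rating point; the interpretive problems arise when A or SDy is estimated in some other unit.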

In fact, the only difference between the two models is that instead of estimating SDy directly (which translates different effect size measures into dollar value utilities), Raju et al. suggested that we should estimate two modified components of SDy: the constant (A), and the standard deviation of performance ratings. As explained, if the criterion in the effect size estimate is performance ratings, then SDy in the Brogden equation gives the dollar value utility of “one standard deviation change in performance ratings after the intervention”. In the same way, the constant A in the Raju et al. equation tells us the dollar value utility of “change in performance ratings after the intervention in raw score form”. The constant A translates “difference in performance ratings in raw score form” into dollar value utility because the standard deviation of ratings in the denominator of the effect size measure is cancelled out in the Raju et al. equation. If, according to Raju et al., the constant A is the slope of the regression line when dollar value utility is regressed on performance ratings, the unit of measurement of A should be “dollar value utility per unit change in performance rating”.

The Raju et al. approach, therefore, suffers from the same problem as the Brogden utility equation when SDy and effect size are measured in different units which may not be highly correlated. Raju et al. suggest the use of mean compensation as an estimate of A. This operationalisation of A is valid only under the assumption that mean compensation can be interpreted as “dollar value utility per unit change in performance rating”. Otherwise, a better measure of A should be developed before the final utility estimate using the Raju et al. equation can be meaningful.

4. Implications for Future Development and Utility Estimates

It follows from the above discussion that the direction proposed by Bobko et al. (1987) of understanding the SDy estimated by different subjective supervisory estimation procedures is not recommended if human resource interventions are to be treated like other capital investment programs. The objective of getting the dollar value utility estimates should be known first.

If the position of Boudreau (1983a) and Cascio and Morris (1990) is correct and dollar value utility estimates of human resource interventions that are directly comparable to other capital investment projects are needed, we should be looking at new definitions of SDy. As illustrated, if supervisory ratings are used as the criterion in the effect size measure, SDy should then be defined as “the increase in future cash inflow attributed to every standard deviation increase in performance ratings of each employee”. With this definition, SDy will be extremely difficult to estimate.

One simple solution to the above problem is to shift the effect size measures themselves. Given a large number of measures of the effect of a human resource intervention expressed as change in cash flows, the average effect size can be summarised using the technique of meta-analysis. With the average effect before and after the intervention measured as change in cash flows, it is not necessary to estimate SDy and the effect size separately, because the effect size measures are already in dollar terms.
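The meta-analytic summary described above reduces, in its simplest form, to a sample-size-weighted average of the per-study effects. A minimal sketch, using hypothetical study results (the document reports none):

```python
# Hypothetical studies: each reports the intervention's effect directly as a
# change in cash flow per employee (dollars), together with its sample size.
studies = [
    (2_400.0, 40),   # (dollar effect, n)
    (1_800.0, 25),
    (3_100.0, 60),
]

# Sample-size-weighted mean effect: the basic meta-analytic summary.
total_n = sum(n for _, n in studies)
avg_effect = sum(effect * n for effect, n in studies) / total_n
```

Because each study's effect is already in dollars, `avg_effect` is itself a dollar value utility per employee, and no separate SDy estimate is required.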

This is the approach used by Florin-Thuma and Boudreau (1987) in estimating the dollar value utility of a simple feedback program. The problem with this approach is that company cash flows, when used as a criterion measure in an experiment, can be highly contaminated by factors other than the experimental treatments. Rigorously controlled experimental designs are therefore needed. In addition, it is difficult to relate any psychological intervention directly to change in the cash flow of the organisation. With this approach, practicality is a major concern.

The alternative answer to the problem is to take a realistic look at the effectiveness measures of psychological interventions and investigate whether they can be matched with the current SDy estimates. That means, on top of Bobko et al.'s (1987) question of the construct validity of SDy, one has to pay serious attention to the construct validity of the effectiveness measures of psychological interventions. For example, if one wants to use supervisory ratings in the utility equation and is interested in future cash flow, one has to define SDy as the change in future cash flow per standard deviation change in performance ratings.

Corresponding methods for estimating this SDy parameter must be designed. Finally, if the Schmidt-Hunter supervisory SDy estimates continue to be used, it is necessary to understand the limitations and assumptions behind them when interpreting the final utility estimates. First, the utility estimates are measured in dollar value productivity (outputs and services) rather than future cash flow. Second, it is assumed that the effect size measure with supervisory ratings as the criterion (if this is the criterion variable used) is the same as the effect size measure with dollar value productivity as the criterion.

Without enough evidence that the two are virtually equivalent, the accuracy of the obtained utility estimates will be highly questionable.

Approaches to Measuring the Value of HR Programs

Traditional Evaluation of HR Programs
* Illustrative measurements: new-hire skills, trainee knowledge, changes in attitudes, turnover levels.
* Observations: a rich source of information on program effects, but statistical results are not easily translated to reflect organizational goals, and statistical presentations may be daunting to many organizational constituents.

Utility Analysis for Specific Programs
* Illustrative measurements: knowledge, skills, and performance assessments, transformed to dollar values and offset with estimated costs.
* Observations: a wide array of approaches estimating the payoff from human resource program investments. Useful logic and rigor, but the complexity and assumptions may reduce credibility and usefulness.

Financial Efficiency Measures of HR Operations
* Illustrative measurements: cost-per-hire, time-to-fill, training costs.
* Observations: compelling explicit dollar-value calculations and comparisons, but may over-emphasize human capital cost relative to value.

HR Activity and "Best Practice" Indexes
* Illustrative measurements: "100 Best Companies to Work For," Human Capital Benchmarks.
* Observations: focus on specific HR activities provides a useful link to specific actions, and there are tantalizing results showing that HR practices correlate with financial outcome measures. However, causal mechanisms and direction may be unclear, leading to incorrect conclusions and actions.

Multi-Attribute Utility (MAU)
* Illustrative measurements: ProMES applied to HR programs; specific MAU models built for particular organizations.
* Observations: a useful method for explicating the underlying value dimensions. Can incorporate non-linearities and non-dollar outcomes, but the participant requirements can be daunting, and the models generally rely heavily on self-reported and subjective parameters.

HR Dashboard or Balanced Scorecard
* Illustrative measurements: how the organization or HR function meets goals of "customers, financial markets, operational excellence, and learning."
* Observations: a vast array of HR measures can be categorized, and the "balanced scorecard" is well known to business leaders. Software can allow users to "drill" or "cut" HR measures to support their own analysis questions, but naive users may misinterpret or mis-analyze the information.

Financial Statement Augmentation
* Illustrative measurements: supplements to annual reports (e.g., Skandia and ABB); the Human Capital "Navigator".
* Observations: reporting human capital factors alongside standard financial statements raises the visibility of HR, and a vast array of human resource and human capital measures can be reported. However, the link between reported measures and organizational and investor outcomes remains uninvestigated, and "information overload" can result without a logic framework.

Financial Statement Reconciliation
* Illustrative measurements: Human Resource Accounting, Intangible Asset Measurement, "Putting Human Capital on the Balance Sheet".
* Observations: reliance on standard financial statements or accounting logic may be compelling to financial analysts, and the approach acknowledges the limitations of financial analysis in accounting for human capital. It may be limited, however, in its ability to inform decisions about human resource program investments.

Example of a Utility Analysis

X Company had to evaluate a contracts management course offered on a fee-for-service basis by the human resource department of a government agency. The course trained contract officer's technical representatives (COTRs) in the following:
* How to specify requirements
* How to build a request for quotes or proposals
* How to evaluate bidders
* How to select and contract with the best supplier
* How to manage contract performance
* How to ensure the delivery of the needed products or services on time, at cost, and to specifications.

One of the questions being asked was whether the course returned a monetary value greater than its cost. The company proposed a utility analysis as the means to assess the monetary benefits produced by the course, and a return on investment analysis to determine the ratio of benefits received to the cost expended.

Prior to these evaluations, we determined that the content offered by the course was relevant to the COTR role and that the course participants did demonstrate increased proficiency in their performance as a result of completing the course.

Measuring Productivity and Determining Its Monetary Value

With the role identified, the company studied the job it accomplished by reviewing its tasks, outputs, and performance expectations. No measure of productivity existed, yet the means for deriving one appeared evident. First, the COTR role had a defined output and a criterion for judging success.

COTRs were responsible for successfully satisfying a product or service need within their agency through contracting. Successful satisfaction of the need meant the timely delivery of products and services that met technical specifications and the accomplishment of these ends at the cost specified. Second, there was a monetary value associated with the output. The dollar value of every contract a COTR managed was systematically determined. Third, there was a logical way to relate the monetary value of the role’s output and its success criterion.

A COTR realized the value of a contract to the degree that the contract was concluded on time, at cost, and to specifications. Conversely, to the degree it was not concluded on time, at cost, and to specifications, monetary value was lost. To measure role productivity, the company developed and tested the COTR Productivity Rating Form. This form measured the degree to which each COTR brought in his or her assigned contracts at cost, on time, and to specification. It also calibrated the importance of each of these factors for the contracts managed and the typical degree of control the COTR had over each contract.

The different component measures were converted into an overall productivity score. This score was a percentage representing the degree to which each COTR realized the value of the contracts he or she managed on a yearly basis. The COTR Productivity Rating Form was sent to 266 randomly selected supervisors of COTRs so that the ratings would reflect the status of COTRs in the general population. One hundred and thirty (130) responses were received (a 48.9% response rate). This response level provided estimates of productivity that were accurate to +/-5% at a 95% level of confidence.

Establishing the Dollar Value of Productivity

The company completed three steps to determine the dollar value of improved productivity. First, we distributed the productivity scores achieved by COTRs to determine their average level of productivity. Exhibit 1 depicts the distribution of COTR productivity. The average performer was 81.65% successful in extracting the controllable value from the contracts he or she managed. In contrast, the exemplary COTR realized 94.04% of the value from the contracts he or she managed, and the poor performing COTR extracted only 69.6%. To calibrate the value of productivity in dollars, the study used the median face value of contracts managed by COTRs during one year, as modified by the control the COTR has over the outcome of the contracts. The degree to which a COTR brings in his or her assigned acquisitions at cost, on time, and to specifications determines how much of the controllable dollar value of those contracts is realized. The median face value of contracts fulfilled per year by COTRs was $500,000. Corrected for the degree of control COTRs have over outcomes, as perceived by their supervisors, the median potential single-year benefit a 100% productive COTR produces is $397,525. By multiplying the average actual productivity of COTRs (81.65%) by the controllable dollar value of the contracts a COTR manages on a yearly basis ($397,525), the study estimated the dollar benefits generated by the average performing COTR at $324,583.13. Poor performing COTRs, that is, performers achieving at or below the 15th percentile of all COTRs, generated only $274,344.10 of value each year.
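The benefit figures above are simple multiplications of a productivity rate against the controllable contract value. A short Python sketch, using the rounded percentages reported in the text, reproduces them to within a few dollars (the study's own figures were evidently computed from unrounded percentages):

```python
# Controllable dollar value of a COTR's contracts per year, from the study.
controllable_value = 397_525.0

# Productivity rates reported above, rounded to two decimal places.
productivity = {"average": 0.8165, "exemplary": 0.9404}

# Annual dollar benefit generated at each productivity level.
annual_benefit = {level: rate * controllable_value
                  for level, rate in productivity.items()}
```

Here `annual_benefit["average"]` comes to roughly $324,579, a few dollars under the study's $324,583.13 because of the rounded percentage.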

Exemplary performing COTRs, defined as incumbents whose productivity was at or above the 85th percentile of all COTRs, generated $373,895.44 of value.

Establishing the Dollar Value of Productivity Improvement

To determine the monetary value of improvement in productivity, the study computed the dollar value of one standard deviation of change (SD$) in role productivity. The SD$ for the current distribution of performers is $49,239.04. This means that if some intervention advanced the productivity of a COTR by one standard deviation, that COTR would generate $49,239.04 in additional benefits to the agency each year.

Measuring the Course's Effect on COTR Productivity

The correlation between COTRs' job proficiency and productivity ratings served as the mathematical bridge for estimating the course's impact on performer productivity. The elements required to use this bridge were the amount of proficiency change produced by the course, the regression coefficient (beta) relating job proficiency scores to productivity ratings, and the standard deviation of productivity scores. Applying these elements, the course advances COTRs upward in productivity by .1547 standard deviations.

Determining the Course's Utility

As stated above, utility is the dollar value of the increased productivity of a single COTR that is generated by the course. To determine the utility of the course, the study translated the distance the course advanced COTRs along the productivity continuum into dollars. As reported, the course advanced COTRs .1547 standard deviations up the productivity continuum. We previously determined that one standard deviation of change in productivity has a monetary value of $49,239.04.

Multiplying this amount by .1547 gives the course's utility (.1547 x $49,239.04 = $7,617.27). This figure is the dollar value of the improvement in productivity evidenced by each COTR as a result of training.

Assessing Return on Investment

The return on investment (ROI) was computed using the conventional method of dividing the dollar value of the productivity benefits generated by the course by the cost of participating in the course. In this study, a desirable ROI was defined as any value greater than 1.
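The utility arithmetic above is a single multiplication; in Python:

```python
# SD$: dollar value of one standard deviation of COTR productivity.
sd_dollars = 49_239.04
# Standard deviations the course moves a COTR up the productivity scale.
effect_sd = 0.1547

# Per-COTR, per-year utility of the course.
utility_per_cotr = effect_sd * sd_dollars
```

The product is about $7,617.28; the penny-level difference from the study's $7,617.27 is rounding.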

The study determined the per-student cost of completing the COTR course. It added the fee charged to departments for each COTR taking the course to the opportunity cost of the COTRs not performing their regular jobs during the 10-day period of instruction. The fee ($700) included all expenses associated with the course. The opportunity cost was computed by dividing the salary of the typical COTR who participated in the course (GS-14, Step 1) by the number of hours that define full-time employment in the Government (2,087).

This per-hour cost was then multiplied by the 80 hours that the COTR was off the job. The opportunity cost per student was $2,305.63. The total cost of participating in the course was thus computed as $3,005.63 per COTR. The ROI for the course was 2.53 ($7,617.27/$3,005.63) for one year of COTR performance following completion of the course. This means that for every dollar invested in completing the course, the sponsoring department receives $2.53 in benefits in the first year. Any reasonable assessment of return should recognize that the benefits of the course extend forward in time.
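The cost and ROI computations can be sketched the same way. Since the GS-14 Step 1 salary itself is not stated in the text, the sketch starts from the stated total cost components:

```python
course_fee = 700.00            # all-inclusive fee charged per COTR
opportunity_cost = 2_305.63    # 80 hours off the job at the GS-14 hourly rate
total_cost = course_fee + opportunity_cost   # $3,005.63 per COTR

utility_year_1 = 7_617.27      # dollar value of the productivity gain, year one
roi_year_1 = utility_year_1 / total_cost     # about 2.53

# Three-year horizon, assuming the benefit repeats each year.
roi_3_year = 3 * utility_year_1 / total_cost # about 7.60
```

Dividing benefits by costs (rather than netting costs out first) is the convention the study states; a benefit-cost ratio above 1 therefore marks a worthwhile investment.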

Given the general stability of the content the course teaches, a three-year period for return on investment was considered conservative. Over 3 years, the total productivity improvement benefit is $22,851.81 and the ROI is 7.60, meaning that for every dollar spent, $7.60 in agency benefits is generated.

Utility Analysis for Assessing the Effectiveness of Selection Processes

Utility theory might be used, for example, to determine the dollar value of a selection test that enables an employer to identify and hire managers for a specific job whose productivity is higher than that of managers hired without the test.

The calculation of utility might involve several variables. For example, the validity of the selection test would be a critical variable, in that it provides an indication of the predictive ability of the test. Additionally, the increased production, its contribution to profitability, and the standard deviation of that contribution would be variables in the calculation. Finally, other variables might be included in the analysis, such as the cost of testing enough applicants to obtain a sufficient number having scores above the cut-off point.

Utility analysis may be used as an evaluation approach for assessing the effectiveness of selection processes. An interesting feature of the utility analysis approach to cost-benefit analysis is that it can be linked with capital budgeting processes. In turn, this link with capital budgeting may make such evaluations of much greater interest to general managers. Like capital budgeting, utility analysis of selection procedures deals with projected cash flows, such as savings from increased productivity.

The results of the utility analysis of such cash flows can be expressed as net present values.

Example

A company has decided to pursue a strategy of increasing its market share. To implement this strategy, it needs 15 additional salespersons this year. The company's current selection procedures are based solely on interviews that have a validity coefficient of .15, while the new test will produce a validity coefficient of .30. Validity coefficients indicating the accuracy of selection procedures range in value from 0 to 1.0, with a value of 1.0 indicating perfect prediction. Because the new test holds out the prospect of making better selection decisions (i.e., selecting a higher percentage of applicants who turn out to be better salespersons), an estimate of the dollar value of better performance is needed to determine the benefit of the new procedure. The company knows the contribution margin for each salesperson, which is sales minus all variable costs, including the cost of goods sold and his or her compensation. Based on these data, the company determined that the value of better performance is $20,000.

Given the $20,000 value of better performance, utility analysis then involves comparison of the associated costs with the benefits. In order to make this comparison, the company has determined that its current interviewing costs are $100 per applicant and that the new test, which requires special scoring, will cost $250 per applicant. It also knows that salespersons stay with the company for an average of three years. If the company chooses the top 10 percent of all applicants, the formula provided by Schmitt and Klimoski indicates that the utility of the new test would be approximately $215,000.
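The roughly $215,000 figure can be approximated with a standard Brogden-Cronbach-Gleser selection-utility computation. The sketch below is an assumption about the formula's shape (Schmitt and Klimoski's exact formulation may differ in detail): it charges only the incremental cost of the new test over interviewing, and it uses the standard result that the mean standard score of those selected equals the normal-curve ordinate at the cut-off divided by the selection ratio.

```python
import math

# Inputs from the example above.
n_hires = 15                 # salespersons needed
tenure = 3                   # average years of service
r_new, r_old = 0.30, 0.15    # validity of the new test vs. interviews alone
sd_y = 20_000.0              # dollar value of one SD of sales performance
selection_ratio = 0.10       # top 10 percent of applicants are hired

# Mean standard score of the selected group: ordinate of the standard normal
# curve at the cut-off (z = 1.2816, the 90th percentile) over the selection ratio.
z_cut = 1.2816
ordinate = math.exp(-z_cut ** 2 / 2) / math.sqrt(2 * math.pi)
z_selected = ordinate / selection_ratio          # about 1.75

applicants = round(n_hires / selection_ratio)    # 150 applicants must be tested
extra_cost = (250 - 100) * applicants            # incremental testing cost: $22,500

# Incremental validity times dollar-valued performance, over the hires' tenure,
# less the added testing cost.
utility = n_hires * tenure * (r_new - r_old) * sd_y * z_selected - extra_cost
```

Under these assumptions `utility` comes out near $214,000, in line with the approximately $215,000 reported above.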

Example 1

Use utility analysis to measure the return on investment (ROI) of a training activity conducted by the HR department of an organization. The effect size is to be taken as 0.4. The standard deviation of job performance in dollars for the untrained group is $7,500. The duration of the training effect is 1 year. The number of employees undergoing training is 250, and the cost of training per employee is $600.

The utility analysis equation is given below:

U = N T SDy dt - c

where: U = the total change in utility in dollars after the training program; N = the total number of trainees;

T = the number of years the training effect on performance lasts; dt = the effect size index, which is the difference in job performance between the average trained and untrained employee in standard deviation units; SDy = the standard deviation of job performance in dollars for the untrained group; and c = the total cost of the training program.

Now, dt, the effect size of the training program, is given as 0.4 (40%); SDy, the standard deviation of job performance in dollars for the untrained group, is $7,500; T, the duration of training effectiveness, is 1 year; N, the number of employees, is 250;

and c, the cost of training per employee, is $600.

Thus, financial utility U = 0.4 x 7,500 x 1 x 250 - 250 x 600 = $600,000

Return on investment (ROI) = output/input = financial utility / cost incurred = 600,000 / (250 x 600) = 4

Thus the ROI is 400%.

Example 2

Use utility analysis to measure the return on investment (ROI) of selecting employees through an assessment centre rather than the traditional hiring method. The effect size is to be taken as 0.5. The standard deviation of job performance in dollars for the traditionally hired group is $11,000. The duration of the effect is 3 years. The number of employees going through the assessment centre is 50.

The cost per employee is $3,100. The utility analysis equation is given below:

U = N T SDy dt - c

where: U = the total change in utility in dollars after selecting through the assessment centre; N = the total number of hires; T = the number of years the effect of selection on performance lasts; dt = the effect size index, which is the difference in job performance between the average person hired through the assessment centre and employees hired through the traditional process; SDy = the standard deviation of job performance in dollars for the traditionally hired group; and c = the total cost of the assessment centre program.

Now, dt, the effect size of the assessment centre program, is given as 0.5 (50%); SDy, the standard deviation of job performance in dollars for the traditionally hired group, is $11,000; T, the duration of the effect, is 3 years; N, the number of employees, is 50; and c, the cost per employee, is $3,100.

Thus, financial utility U = 0.5 x 11,000 x 3 x 50 - 50 x 3,100 = $670,000

Return on investment (ROI) = output/input = financial utility / cost incurred = 670,000 / (50 x 3,100) = 4.32

Thus the ROI is 432%.
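Both worked examples apply the same equation, so the computation can be wrapped in one small helper (a sketch; the function and parameter names are illustrative, not from the source):

```python
def utility_and_roi(n, t, sd_y, d, cost_per_person):
    """Apply U = N * T * SDy * dt - c, as defined above, and return (U, ROI)."""
    total_cost = n * cost_per_person
    utility = n * t * sd_y * d - total_cost
    return utility, utility / total_cost

# Example 1: training program.
u1, roi1 = utility_and_roi(n=250, t=1, sd_y=7_500, d=0.4, cost_per_person=600)
# u1 is $600,000 and roi1 is 4.0, i.e. an ROI of 400%.

# Example 2: assessment centre vs. traditional hiring.
u2, roi2 = utility_and_roi(n=50, t=3, sd_y=11_000, d=0.5, cost_per_person=3_100)
# u2 is $670,000 and roi2 is about 4.32, i.e. an ROI of 432%.
```

Parameterising the equation this way makes the sensitivity of the result obvious: utility scales linearly in each of N, T, SDy, and dt, while ROI depends only on their product relative to total cost.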
