Literature Evaluation Paper

Abstract

The use of pre-employment personality testing has drawn criticism for many years but has recently received renewed attention. Of particular concern is that applicants who understand how pre-employment screening works may "fake" their responses to produce an outcome that results in hiring. The present paper identifies current thinking on applicants' attempts to influence hiring outcomes through "faking" on selection methods.
The following three literature reviews will discuss the rationale, outcomes, and differences in each of the selected studies.

Literature Evaluation Paper

Personality testing and its use in employment-related decisions has been at the forefront of organizational research and practice since the 1950s. Various types of personality testing, ranging from the simple Color Quiz to the Myers-Briggs Type Indicator, can help an employer focus on a prospective employee's specific personality traits and whether those traits fit the needs of the open position.
The use of pre-employment personality testing has drawn criticism for many years but has recently received renewed attention. Of particular concern is that applicants who understand how pre-employment screening works may try to "fake" an outcome that will result in hiring. The present paper identifies current thinking on applicants' attempts to influence hiring outcomes by giving what they believe are socially desirable responses on selection methods. The following three literature reviews discuss each of the selected studies.
Jill Ellingson, Paul Sackett, and Leaetta Hough

In a study conducted by Ellingson, Sackett, and Hough (1999), the authors evaluated whether employers using social desirability (SD) scales could effectively apply corrections based on SD scale scores to circumvent applicant "faking" on the test administered, and thereby improve the accuracy of selection decisions. The authors first set out to explicitly define the terms under study: social desirability, faking and intentional distortion, and social desirability correction.
The authors put forth a clear discussion of the method and reasoning behind the research and its outcomes, as well as the previous research that shaped those decisions. Data for the study were collected through an independent study on the same topic, Project A. The sample included 245 currently enlisted Army personnel, was 100% male, and all represented the same occupational specialty (Ellingson, Sackett, & Hough, 1999). Participants took the assessment twice, once in the morning under standard instructions and once in the afternoon with instructions to "fake good" or "fake bad."
The participant pool was narrowed to 128 for the analyses: because some participants had been instructed to "fake bad," their scores were removed from the final assessment. The results supported the researchers' initial hypothesis: when intentional faking occurs, whether at the behest of the researcher or in an applicant's pursuit of a positive outcome, mean scores are affected. On the specific question of applying a correction based on the SD scale, however, the results were divergent.
In some cases the correction produced a higher proportion of positive hiring outcomes; in other cases, a lower one. On average, applying an SD correction did not meaningfully affect the accuracy of selection decisions. The use of SD scales in hiring is a difficult topic to examine and requires extensive study at both the individual and aggregate levels. Because this study used only 128 participants from a single, homogeneous group (male, same occupation) who received explicit instructions to "fake" their answers, its generalizability is limited and further study is required.
The authors cite other studies whose methodology and outcomes support their findings, but also note the need for further studies to continue this work. In addition, the SD scale used in this study, collected in conjunction with another study on the same topic, could be seen as limited. This study is clearly a scholarly work: it appears in a peer-reviewed scholarly journal, the Journal of Applied Psychology, and has been cited multiple times in subsequent articles.

Leaetta Hough and Frederick Oswald

Interestingly, Leaetta Hough was involved in a later study on the same subject that rested on a less consistent evidentiary basis. Hough and Oswald's article was the focal article in a 2008 issue of Industrial and Organizational Psychology. Their examination of personality testing in personnel selection used a series of seven questions to survey current perspectives, one of which focuses specifically on "the concern with faking on personality tests" (Hough & Oswald, 2008).
Although the topic has been studied widely, Hough and Oswald state that the attention given to faking on SD scales has varied: some researchers see it as an extremely important variable in employee selection, while others treat it as something to manage rather than a central focus. Throughout their paper, Hough and Oswald emphasize the organization's responsibility for failing to draw a clear line to what it is looking for in employees through its use of employment personality tests, rather than placing blame on the applicant for "faking" behavior.
Based on the thinking Hough and Oswald encourage, current practice around "faking" behavior needs to shift away from treating scores as indicators of what is "true." Some companies have adopted the practice of removing applicants with very high SD scores in the hope of screening out those who have "faked" their responses; this practice has not been shown to improve post-employment productivity. One of their main points regarding the use of SD scales in employment selection is the need to incorporate other personality tests to obtain a full spectrum of answers from the applicant.
Although some applicants may give what they deem more socially acceptable answers on some scales, and thereby "fake" their responses, Hough and Oswald (2008) are convinced that applicants will provide differentiating information when multiple scales are used. Taken as a whole, the Hough and Oswald article aims to sharpen thinking about personality testing in general, and SD scales in particular, within the employer's model for applicant selection. However, their conclusions have several drawbacks.
Unlike the Ellingson, Sackett, and Hough research, there is no clear definition of "faking" when referring to the outcomes of SD testing. This, combined with employers' failure to specify what they hope to gain through SD scale use, can be a significant issue. Moreover, their final conclusions simply assume that SD measures produce valid outcomes; they do not entertain problems with the testing itself, so further study is needed in this area.
Griffith and Peterson, in the article reviewed next, take up this question of validity in their research. Hough and Oswald also place great importance on the employer's use of testing and the need to clearly "conceptualize and operationalize their objectives" (Hough & Oswald, 2008). Without a clear understanding of what the organization is looking for in an employee, and without defined outcomes it hopes the SD scales will reveal, the scales yield information about "faking" that is of no use to the organization anyway.
Griffith and Peterson

Griffith and Peterson's response to the Hough and Oswald article, published in the same issue of Industrial and Organizational Psychology, focuses on the lack of studies correlating SD scales with faking behavior. Their response argues that the initial evidence for the validity of SD measures is flawed, and that there is therefore little empirical evidence linking faking behavior and SD (Griffith & Peterson, 2008). Griffith and Peterson begin by clearly defining applicant faking and SD.
Scores for each of the outcomes of faking are addressed, as is the potential need for employers to define the outcomes they desire. Their conclusions move beyond the original statements made by Hough and Oswald and lie more clearly in questions about the instrument itself as the potential flaw, rather than the individual applicant's desire to "fake." Once again, Griffith and Peterson call for further study of applicant "faking" behaviors with regard to SD.
However, the focus in this response is more on the lack of validity evidence for SD measures despite their longstanding acceptance as a tool. Griffith and Peterson wish to increase direct study of SD and thereby increase its usefulness. The authors encourage the use of multiple personality indicators to assess future applicant behavior, with a combination of these measures informing final hiring decisions. Although positioned as a response to the focal article in this issue of Industrial and Organizational Psychology, the article seems to lack structure in its claims.
While certainly a scholarly article addressing the topic at hand, it would be more persuasive if the authors had supported their ideas with a structured study of their own. Further research did uncover a more in-depth study of the topic by the authors and another colleague in the Journal of Business and Psychology.

Conclusion

All three of the reviewed articles show that there is a clear need for increased study in the field of personality testing and applicant "faking." There is no question in the human resources field that personality testing is helpful in finding the right applicant for a position, but there is also no question that applicants falsify answers to improve their chances. As the preceding articles show, studies reach different conclusions about how to handle this "faking": applying SD corrections to weed out "fakers," using a wider variety of scales to obtain a more complete view of the applicant beyond the SD scale alone, or questioning whether the SD scale should be used in the selection process at all.
It is important to understand that, regardless of one's views on personality testing, SD scales, and the potential for applicant "faking," these are widely used selection tools, and they certainly require further study to determine their continued usefulness in applicant hiring.

References

Ellingson, J., Sackett, P., & Hough, L. (1999). Social desirability corrections in personality measurement: Issues of applicant comparison and construct validity. Journal of Applied Psychology, 84(2), 155-166. doi:10.1037/0021-9010.84.2.155

Griffith, R., & Peterson, M. (2008). The failure of social desirability measures to capture applicant faking behavior. Industrial and Organizational Psychology, 1(3), 308-311. doi:10.1111/j.1754-9434.2008.00053.x

Hough, L., & Oswald, F. (2008). Personality testing and industrial-organizational psychology: Reflections, progress, and prospects. Industrial and Organizational Psychology, 1(3), 272-290. doi:10.1111/j.1754-9434.2008.00048.x