Research and Development

The body of the CareerFitter assessment is a mixture of personality testing and occupational research derived from the original assessment, “The CareerFitter.” The root of the assessment combines occupational research with decades of foundational work from leading psychologists, and countless authors and researchers have since utilized and built upon these early findings. The organization that developed our assessment began adding to this research in 1998, refining and focusing the career aspects of these earlier works to develop the algorithms that form the backbone of the CareerFitter online test.

The initial focus groups for the assessment were various organizations, employees, and individuals throughout the United States. The organizational goals were to decrease turnover and establish a baseline of desirable, definable employee attributes. To proceed with development, the organization enlisted corporate managers, supervisors, team leaders, and top performers in their respective fields. The development team found that the assessment identified statistically significant characteristics and traits in the employees who excelled at their respective positions, helping organizations minimize turnover and establish more effective parameters for recruiting new employees. As a direct result, organizations and Human Resource departments experienced lower turnover alongside a documented increase in employee productivity and satisfaction. By implementing the career assessment and using its profile results, organizations narrowed the selection focus for their recruiters. Since the initial research began, the organization has provided career assessments and profiles to countless organizations and individuals worldwide. In fact, since the online version became available, the assessment has unexpectedly crossed cultural boundaries and language barriers; to date, it has reached individuals from every continent.

Beginning in 1998, a disparate impact study was conducted in conjunction with adjunct professionals with expertise in Applied Psychology. The procedure and analysis of the study followed the guidelines and standards of the American Psychological Association, as well as the principles for validation and personnel selection endorsed by the Society for Industrial and Organizational Psychology. The sample used for this study closely resembled the pool of applicants who might be tested using our method of assessment, and it allowed analyses of several protected groups (females and non-white minorities) as defined by current statutory law. The results indicated that members of neither protected group scored significantly lower on the assessment instrument than other individuals. It was therefore concluded that the assessment does not adversely impact members of these groups; that is, there is no evidence of disparate impact against them, and the basic responses were consistent across demographic samples.

Following the initial findings, efforts were immediately implemented to further support the validation process through the voluntary participation of individuals, corporations, and organizations nationwide. These participants formed a regression sample that further supported the earlier disparate impact study.

The Instrument: The assessment is an occupational assessment developed as an individual employment selection and management development tool for large and small for-profit and non-profit organizations. Years of research indicate that people generally fail on the job because of the environment in which they are placed, not due to a lack of skills or competence. The assessment has proven to be valid, accurate, objective, and unbiased, and it is used to help put the right person in the right job.

The Method

The method involved examining each participant’s assessment results and comparing them to specific job requirements, skills, and core competencies for a particular job. The requirements, skills, and core competencies were predetermined to avoid skewing the results.


A validation study was conducted to establish that the assessment measures its intended constructs and serves its intended purpose. This was established in the following ways:

• Construct validation strategies

• Criterion-related validation strategies

Construct validation strategies

A construct validation strategy requires the researcher to demonstrate a logical or judgment-based relationship between the characteristics measured by an instrument and the requirements of the job.

Criterion-related validation strategies

The criterion-related validity study required establishing an empirical relationship between assessment results and criteria based on job performance. This relationship is expressed as a correlation between test scores and criterion performance. This aspect of the study demonstrated a direct relationship between test selection and job performance.
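As a sketch of the criterion-related approach described above, a validity coefficient can be computed as the Pearson correlation between assessment scores and a job-performance criterion. The scores and ratings below are purely illustrative, not data from the study:

```python
import statistics

# Hypothetical data: assessment scores and supervisor performance
# ratings for the same group of employees (illustrative values only).
scores = [72, 85, 64, 90, 78, 69, 88, 75]
ratings = [3.1, 4.2, 2.8, 4.5, 3.6, 3.0, 4.4, 3.5]

def pearson_r(x, y):
    """Validity coefficient: correlation of test scores with the criterion."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(scores, ratings)
```

A coefficient near zero would indicate that test scores tell us little about job performance; the closer the coefficient is to 1.0, the stronger the evidence of criterion-related validity.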

Aggregate Statistics

Within each grouping of variables, major statistics (e.g., validity coefficients) were aggregated by computing a weighted average and standard deviation across study results. Results are weighted by study sample size.

The weighted average indicates the best estimate of the assessed population value for the statistic (e.g., the relationship between test results and job performance).

The weighted standard deviation was used to compute confidence intervals about the weighted average; the confidence intervals were then used for statistical significance tests on the weighted average.
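The aggregation steps above can be sketched as follows; the per-study sample sizes and validity coefficients are illustrative assumptions, not the actual study data:

```python
import math

# Illustrative per-study results: (sample size, validity coefficient).
studies = [(120, 0.31), (85, 0.27), (200, 0.35), (60, 0.22)]

total_n = sum(n for n, _ in studies)

# Sample-size-weighted average of the validity coefficients:
# the best single estimate of the population value.
weighted_avg = sum(n * r for n, r in studies) / total_n

# Sample-size-weighted standard deviation around that average.
weighted_var = sum(n * (r - weighted_avg) ** 2 for n, r in studies) / total_n
weighted_sd = math.sqrt(weighted_var)

# Approximate 95% confidence interval about the weighted average,
# treating each study as one weighted observation (k = number of studies).
k = len(studies)
se = weighted_sd / math.sqrt(k)
ci_low, ci_high = weighted_avg - 1.96 * se, weighted_avg + 1.96 * se
```

If the confidence interval excludes zero, the weighted average relationship between test results and job performance is statistically significant.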


The statistic used in this analysis was the standard t-test using pooled variance techniques, which examines the difference between the means of two groups. More sophisticated multivariate techniques were initially considered, but given the straightforward nature of the results, these analyses were deemed unnecessary and potentially confusing.

For the t-test analysis, a statistically significant difference between two groups on an assessment dimension would indicate disparate impact within the assessment process. Group differences were evaluated against the critical value for significance (t = 2.01, p < .05).
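A pooled-variance two-sample t-test of the kind described above can be sketched as follows; the two groups' dimension scores are hypothetical, and the critical value is the standard two-tailed threshold for the resulting degrees of freedom:

```python
import math
from statistics import mean, variance

# Illustrative dimension scores for two demographic groups
# (hypothetical data, not the study sample).
group_a = [68, 72, 75, 70, 74, 69, 73, 71, 76, 70]
group_b = [70, 71, 74, 69, 72, 73, 70, 75, 68, 72]

def pooled_t(x, y):
    """Two-sample t statistic using a pooled variance estimate."""
    nx, ny = len(x), len(y)
    # Pooled variance combines both samples' variances,
    # weighted by their degrees of freedom.
    sp2 = ((nx - 1) * variance(x) + (ny - 1) * variance(y)) / (nx + ny - 2)
    return (mean(x) - mean(y)) / math.sqrt(sp2 * (1 / nx + 1 / ny))

t = pooled_t(group_a, group_b)
# With df = 18, |t| must exceed roughly 2.10 to be significant at
# p < .05 (two-tailed); a smaller |t| means no significant difference
# between the groups, i.e., no evidence of disparate impact on this
# dimension.
```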


The purpose of the current study was to investigate the potential presence of statistically significant differences between respondents' average responses to the assessment. Using the t-test statistic to compare average scores on each dimension measured by the assessment instrument, no overall pattern of results favored a particular group, and across all dimensions tested, no pattern of results favored any particular subgroup. Based on these findings, no consistent pattern of disparate impact emerged in this study, indicating that the assessment instrument is sound and that disparate impact in the employment setting is unlikely.