Carl R. Oliver (oliveca@earthlink.net) is a retired Corporate Ethics Officer, now Senior Lecturer at Loyola Marymount University in Los Angeles, CA. He is a coauthor of the book Business Ethics: The Path to Certainty.
Observation suggests that many companies could do a better job of measuring the effects of business ethics training. Perspective comes from the U.S. Federal Sentencing Guidelines for Organizations (USSG),[1] which set the de facto minimum standards for US business ethics, and the widely used Kirkpatrick Model for evaluating training programs.[2] Two specific underused measures are qualitative content analysis of ethics intake reports and annual employee interviews to audit the ethics element of corporate culture. Part of the solution is switching measurement from return on investment (ROI) to return on expectations (ROE).
Measure for Measure
The difference between organizational climate and culture is well recognized. Shakespeare captured it as early as 1604 in Measure for Measure, when he had Isabella say, “‘Tis set down so in heaven [climate], but not in earth [culture].”[3] The paper ethics program is organizational climate—what a company says employees should do. What employees actually do is organizational culture. When trying to measure the effects of ethics training, it is easy to end up measuring corporate climate—the book answers employees have learned. What should be measured is the corporate culture reality—what employees’ ethics really are.
For management oversight, four fundamental questions are:
- What do we do?
- Why do we do it?
- Do we have to do it?
- Can we do it better?
What do we do?
For business ethics training, companies often list employees who attend ethics training by name, collect end-of-course reaction surveys from attendees, and score attendees’ end-of-course knowledge exams.
Why do we do it?
Business ethics are a key element of corporations’ fiduciary duties,[4] so effective business ethics training can be viewed as vital. From an ROI perspective, the expectation—at least a hope—is that effective training will prevent ethics violations and thereby eliminate time-wasting and expensive investigation, litigation, and penalties.
However, the ROI perspective is limited by a measurement problem: No one can truly know how many ethics violations training prevents. They can’t be counted, much less quantified in dollars. Traditional ROI evaluation of the effects of ethics training is an estimate at best. More likely, it’s a wild guess.
Do we have to do it?
No law requires measuring the effects of business ethics training. However, some contracts may, and the USSG, which set the de facto minimum standards for US business ethics, expect companies to be able to prove they have met them.
Published in 1991 and amended in 2004, the USSG created a carrot-and-stick incentive for organizations to self-regulate their ethics. As the stick, the USSG instructed federal judges to severely punish organizations found guilty of ethics violations. As the carrot, the USSG instructed judges to markedly reduce punishment if a guilty organization could prove it already had an effective ethics program in place.
In briefest form, the USSG set this highest-level business ethics expectation: “an organizational culture that encourages ethical conduct and a commitment to compliance with the law.”[5] This stands as the organization’s mission statement for ethics and the fundamental expectation against which the effects of business ethics training should be measured.
Can we do it better?
Listing attendees may be useful to prove an employee was present at a training session, but it is essentially just an inventory record: people who are present in body are sometimes elsewhere mentally. More sophisticated measures are needed. The widely used Kirkpatrick Model for evaluating training programs identifies four levels of measurement:
- Reaction,
- Learning,
- Behavior, and
- Results.
End-of-course reaction surveys, often called smile sheets, are level one evaluations. They measure how favorably participants react to the course and the instructor.
This can be valuable. For example, one course was taught by a doctor of philosophy using a script from a major commercial training company. The reaction surveys showed that participants’ reactions were terrible. Many said, “Worst course I’ve ever had.” So, before the same instructor taught the course again, one small change was made to the script. Instead of asking participants, “What problems are you experiencing?” the question was changed to, “What solutions have you found to problems you are experiencing?” Thereafter, participants rated the same course as great. Many said it was the best course they had ever taken.
But these reactions do not measure or quantify the business effects of ethics training.
End-of-course knowledge exams, or their more sophisticated cousins, paired pre- and post-course exams, are level two evaluations that aim to measure the knowledge, skills, and attitudes participants gained during training. They are usually a standard feature of computer-based training courses.
They can be valuable. At the end of one course, exams showed most participants were unable to accurately draw an ethics decision-making model. The instructor then made changes to the course, adding homework and an in-class pop quiz specifically addressing the model. Thereafter, most participants accurately drew the model on the final exam.
End-of-course knowledge exams yield a snapshot of each participant’s level of knowledge at that moment. For business ethics, no knowledge norms have been established. What knowledge is measured is more arbitrary than comprehensive. Such exams more likely reflect the test-maker’s ideas of what is important. The content tested may sample knowledge of standards and procedures of the organization’s ethics program and may include ethics scenarios to resolve.
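Where paired pre- and post-course exams are used, the scores can be summarized simply. The following is a minimal sketch, not drawn from any particular ethics program; it assumes a 0-100 scoring scale and hypothetical participant identifiers, and shows one way a training team might tally the average knowledge gain for a level two evaluation.

```python
# Minimal sketch: summarizing paired pre-/post-course exam scores for a
# Kirkpatrick level two evaluation. Participant IDs, scores, and the
# 0-100 scale are hypothetical assumptions for illustration only.

from statistics import mean

# Hypothetical paired scores: participant ID -> (pre-course, post-course)
scores = {
    "emp001": (55, 85),
    "emp002": (70, 90),
    "emp003": (60, 60),
}

pre = [p for p, _ in scores.values()]
post = [q for _, q in scores.values()]

average_gain = mean(post) - mean(pre)                   # raw average gain in points
improved = sum(1 for p, q in scores.values() if q > p)  # participants who scored higher afterward

print(f"Average pre-course score:  {mean(pre):.1f}")
print(f"Average post-course score: {mean(post):.1f}")
print(f"Average gain:              {average_gain:.1f} points")
print(f"Participants who improved: {improved} of {len(scores)}")
```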
But these are momentary, one-time measures of what students know now, and momentary knowledge is subject to forgetting and the decay of learning. Moreover, good practice is to build ethics training around a year-round campaign model, constantly peppering employees with ethics messages to engage and refresh their active interest and achieve real behavior change. This approach takes lessons from successful political campaigns: the message should be consistent, repeated often, and delivered by multiple vehicles.[6] End-of-course tests do not measure or quantify the business effects of ethics training accumulated over the long term.