Security Awareness Training: Your First Line of Defense (Part 4)

Published on 24 July 2013 / Last Updated on 24 July 2013

In this final installment of the series, we’ll talk about evaluating the training’s effectiveness in both the short term and the long term.

Introduction

In Part 1 of this series, we discussed the importance of security awareness in today’s highly regulated workplaces and in Parts 2 and 3, respectively, we focused on how to develop your own in-house security awareness training program or how to hire an outside company to do the training for you. Regardless of which of those routes you decide to take, one of the most important (but often overlooked) steps comes after the class is over: evaluating its effectiveness both in the short term (through testing) and long term (through observation and documentation of changed employee behavior in relation to security practices). In this final installment of the series, we’ll talk about why and how to do that.

Evaluating student performance and progress

The traditional learning model includes both an instructional phase and an evaluation phase. The instructional phase takes up most of the time and incorporates the delivery of the course content by one or more methods (lecture, discussion, demonstration, hands-on experience, etc.). This is the phase during which learning takes place. During this phase, in a traditional model, the information flow is primarily from the instructor to the student.

The evaluation phase is much shorter and generally comes at the end of the course. In a traditional model, it comes in the form of a final exam, although it can also be interspersed throughout the course in the form of section exams or pop quizzes.

The evaluation can be administered as a written exam, an oral exam, a practical lab assessment (where the student has to demonstrate proficiency in a realistic setting), a project or some combination of these methods. There are many variations within each of these categories, as well; a written exam can be based on multiple choice, fill-in-the-blank, true/false and/or essay questions. Any of these can be “open book,” where students are free to use their resources and notes, or “closed book,” where they must work from memory. Any of these evaluation types can be administered to individuals or to teams working together.

A common problem with the typical lecture/exam scenario is the focus on the test grade by both the students and the instructor. This is inevitable when the test grade is the primary or only metric by which both student and teacher performance are measured. This leads to the practice of “teaching to the test” – that is, simply drilling students in how to regurgitate the “right answer” to specific questions.

This method may be appropriate for some subjects, such as when a child is learning the multiplication tables. Rote memorization works in that situation. It doesn’t work so well when the goal is to instill security awareness in adult workers. Secure computing requires users to be able to recognize a huge variety of threats – including some that are brand new – and assess the risks in different situations, then make intelligent and informed judgments on how to react (or not react). This isn’t something users can simply memorize. The other problem with memorizing for the test is that the learning usually isn’t fully absorbed (often because there is no real understanding of the concepts behind the “right answers”) so that it’s forgotten soon after it’s served its purpose (passing the test).

A better way of evaluating the learning of adults is to have them demonstrate their understanding and mastery of the learning objectives (remember those?) through role-play scenarios, labs, demonstrations or by putting together a presentation of what they’ve learned. However, this is much more time-consuming, for both the learner and the instructor who evaluates the performance/progress.

Remember that while an assessment instrument can provide valuable information about a student’s level of knowledge, it doesn’t really measure learning. The focus in security awareness training should be on working together as a team to protect the company and its computers from threats and attacks, rather than on getting high scores on a test.

Evaluating instruction

The second part of the end-of-course evaluation is directed at evaluation of the instructor and the course content. This is typically done through a questionnaire or survey completed by the students after the course is over. It’s most often passed out at the end of the last class, after the students finish their final tests, but this might not be the best way to do it.

At the end of the class, and especially right after the stress (for some) of taking a test, students are likely to be tired and eager to get out of the classroom. They may rush through the instructor evaluation. They may be so happy it’s over that they automatically give high scores. They may be so stressed and pessimistic about their test results that they automatically give low scores. They may feel that they can’t be honest about any negatives if the instructor is in the room while they complete the evaluation and is the one who collects them, even if they’re not required to sign their names.

You might get more honest and thoughtful evaluations if you have students take the forms home to complete and bring back in a day or two, and have them turned in to someone other than the instructor. This gives students some time to de-stress, to think about the course as a whole (rather than writing the evaluation based on the “in the moment” experience immediately following the test), and to take all the time they need to expand on the reasons for the scores they give.

There are two schools of thought regarding whether evaluations should be anonymous. Students, especially timid ones, are more likely to open up and express their true feelings about the course if they know they won’t be identified. On the other hand, some students may exaggerate negativity or speak out of their own biases if they don’t have to “own” what they say. If you do require that evaluations be signed, that information should be kept private and removed from the feedback given to the instructor.

The evaluation should have separate questions or sections for evaluating the instructor and the course content, as these are two separate issues. You can have an excellent instructor who’s stuck delivering a lousy curriculum that was put together by someone else. Likewise, the course material can be brilliant but the instructor who’s delivering it can stand there and read it to the class in a monotone.

Evaluating long-term effectiveness of the program

What you’re really interested in is the long-term effectiveness of the training program. How do you measure the impact the training has on the real-world security of your network? Security logs, help desk records of security-related incidents, and reports from supervisors, IT and security staff on user behavior can give you metrics for analyzing how effective the training has been in changing behavior.
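As a minimal sketch of that kind of metric, you could compare the number of security-related help desk tickets logged before and after the training date. Everything here (the ticket data, categories and dates) is invented for illustration; in practice the records would come from an export of your ticketing system.

```python
from datetime import date

# Hypothetical help desk records: (date, category).
incidents = [
    (date(2013, 5, 2), "phishing"),
    (date(2013, 5, 20), "password-reset"),
    (date(2013, 6, 11), "phishing"),
    (date(2013, 8, 3), "phishing"),
]

TRAINING_DATE = date(2013, 7, 1)  # assumed date the awareness course ran

def incident_counts(records, cutoff):
    """Count security-related tickets before and after the training date."""
    before = sum(1 for d, _ in records if d < cutoff)
    after = sum(1 for d, _ in records if d >= cutoff)
    return before, after

before, after = incident_counts(incidents, TRAINING_DATE)
print(f"Incidents before training: {before}, after: {after}")
```

A real analysis would normalize for the length of each period and for headcount, but even a raw before/after count can flag whether behavior is moving in the right direction.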

Before-and-after surveys of employees can help you determine whether they have become more aware of company policies related to IT security and how they perceive their own roles in the overall security effort. You can find more information on metrics for evaluating an effective security culture in this Cisco whitepaper.
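The survey comparison itself can be as simple as averaging self-rated awareness scores from the two rounds. The scores below are hypothetical, assuming a 1-to-5 rating scale; the point is only to show the shape of the calculation.

```python
# Hypothetical survey results: each employee rates their awareness of
# company security policy on a 1-5 scale, before and after the course.
before_scores = [2, 3, 2, 4, 3]
after_scores = [4, 4, 3, 5, 4]

def mean(scores):
    """Average of a non-empty list of scores."""
    return sum(scores) / len(scores)

improvement = mean(after_scores) - mean(before_scores)
print(f"Average awareness rose by {improvement:.1f} points")
```

Pairing each employee’s before and after answers (rather than comparing group averages) would give a stronger signal, but requires that responses be identifiable, which trades off against the anonymity concerns discussed earlier.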

Of course, the ultimate test of the effectiveness of your security awareness training is how workers respond to a real attack or social engineering effort. As Ira Winkler points out in an article titled Security Awareness Can be the Most Cost-Effective Security Measure, it’s important to keep your expectations realistic. Like every other security measure, awareness training mitigates risk – it doesn’t remove it entirely. The true measure of whether your training program was effective is whether “the losses prevented by the training are more than the cost of the awareness program.”

Summary

The process of evaluation doesn’t exist in a vacuum. The purpose of evaluating learning progress, instruction and the long-term effects of training is to improve the program going forward. Security awareness is not a “set it and forget it” event. You can’t treat it as if it’s a lifetime vaccine that protects forever against security faux pas. Computer users will need booster shots on a regular basis. New technologies bring new methods of intrusion and attack.

Consider what a security awareness program would have looked like ten years ago. It would not have put much (if any) emphasis on things like mobile security and cloud security – yet today these are near the top of the priority list. IT is constantly evolving and security awareness training has to keep up. Assess the level of security awareness among your users on, at the very minimum, an annual basis. Remember that security is an ongoing commitment – on the part of management, IT and the users. And that’s one of the foundational key points that your security awareness training needs to get across.
