Here is a new approach to evaluation, by Nick Shackleton-Jones:
After the event, however, the system automatically prompts the chosen peer group to reassess those same behaviours at intervals of, say, one, three and six months. At the end of this period a ‘change score’ is calculated: an average value representing the amount of observed behavioural change that has taken place. Knowing that they are living up to the expectations of their peers, people make an effort to change. By coupling a meaningful challenge to the event, learners will endeavour to practise what they have learned – and we can skip directly to robust ‘level 3’ results.
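The ‘change score’ described above could be computed in many ways; a minimal sketch, assuming peers rate each behaviour on a simple numeric scale before the event and again at each follow-up interval (the data layout and 1–5 scale here are illustrative assumptions, not Shackleton-Jones’s actual system):

```python
def change_score(baseline, followups):
    """Average observed behavioural change across peers, behaviours and intervals.

    baseline:  {peer: {behaviour: rating}} gathered before the event
    followups: list of snapshots in the same shape, one per interval
               (e.g. one, three and six months after the event)
    """
    deltas = []
    for snapshot in followups:
        for peer, ratings in snapshot.items():
            for behaviour, rating in ratings.items():
                # Positive delta = peer observed improvement on that behaviour
                deltas.append(rating - baseline[peer][behaviour])
    return sum(deltas) / len(deltas)

# Hypothetical example: one peer rating two behaviours on a 1-5 scale
baseline = {"peer_a": {"listening": 2, "delegation": 3}}
followups = [
    {"peer_a": {"listening": 3, "delegation": 3}},  # one month later
    {"peer_a": {"listening": 4, "delegation": 4}},  # three months later
]
print(change_score(baseline, followups))  # → 1.0
```

The point is less the arithmetic than the design choice: because peers know they will be asked again, the measurement itself nudges the behaviour it measures.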
This builds on an old military approach, Training Validation, as exemplified by the Canadian Army:
On-job performance is an evaluation measure that aims to determine if the trainee has been able to transfer the knowledge, skills or attitude learned in the training environment to the real world of the job. Within the ASAT, on-job performance is an element of validation conducted after at least three months following the training event. There are many different approaches to conducting this form of evaluation. Trainees, superiors and peers can be queried both through written questionnaires and personal interviews, and the results can lead to measurement of the effectiveness of the course/training activity. The validation of individual training is the responsibility of Commander LFDTS.
Examination of trainee reaction, the learning process and the transfer of learning to the job against the anticipated results and resources expended is the final method of determining efficiency. This analysis is a strategic responsibility, and the results are used to modify the conduct of future courses/training.
These are good systems if training was the correct solution in the first place, but note that formal instruction accounts for only about 5% of workplace learning. As I wrote in a previous post, these types of methods work very well when you know what you are trying to achieve and understand the systems you are operating in. They work well when you have established best or good practices on which to base the training. But what happens in complex environments, where “the relationship between cause and effect can only be perceived in retrospect, but not in advance”? This is the situation many workers find themselves in today.
Feedback and validation have to be part of our daily work, not just for training events. This is where PKM practices can help on a personal level, work narration for teams, and communities of practice for disciplines. Once again, work is learning and learning is the work.