
Training Programme Evaluation

This section begins with an introduction to training and learning evaluation, including some useful learning reference models. 

The introduction also explains that for training evaluation to be truly effective, the training and development itself must be appropriate for the person and the situation.

Good modern personal development and evaluation extend beyond the obvious skills and knowledge required for the job or organisation or qualification. 

Effective personal development must also consider: individual potential (natural abilities often hidden or suppressed); individual learning styles; and whole person development (life skills, in other words). 

Where training or teaching seeks to develop people (rather than merely being focused on a specific qualification or skill), the development must be approached on a more flexible and individual basis than in traditional paternalistic (authoritarian, prescribed) methods of design, delivery and testing. 

These principles apply to teaching and developing young people too, which interestingly provides some useful lessons for workplace training, development and evaluation. 


Introduction

A vital aspect of any sort of evaluation is its effect on the person being evaluated.

Feedback is essential for people to know how they are progressing, and evaluation is crucial to the learner's confidence too.

And since people's commitment to learning relies so heavily on confidence and a belief that the learning is achievable, the way that tests and assessments are designed and managed, and results presented back to the learners, is a very important part of the learning and development process.

People can be switched off the whole idea of learning and development very quickly if they receive only negative critical test results and feedback. Always look for positives in negative results. Encourage and support - don't criticise without adding some positives, and certainly never focus on failure, or that's just what you'll produce.

This is a much overlooked factor in all sorts of evaluation and testing, and since this element is not typically included within evaluation and assessment tools, the point is emphasised loud and clear here.

So always remember - evaluation is not just for the trainer or teacher or organisation or policy-makers - evaluation is absolutely vital for the learner too, which is perhaps the most important reason of all for evaluating people properly, fairly, and with as much encouragement as the situation allows.

Most of the specific content and tools below for workplace training evaluation are based on the work of W Leslie Rae, an expert author on the evaluation of learning and training programmes, and this contribution is greatly appreciated. Rae has written over 30 books on training and the evaluation of learning. His guide to the effective evaluation of training and learning, training courses and learning programmes is a useful set of rules and techniques for all trainers and HR professionals.

This training evaluation guide is augmented by an excellent set of free learning evaluation and follow-up tools, created by Leslie Rae.

There are other training evaluation working files on the free resources page.

It is recommended that you read this article before using the free evaluation and training follow-up tools.

Particularly see the notes on this page about using self-assessment in measuring abilities before and after training (i.e., skills improvement and training effectiveness) which specifically relate to the 3-Test tool (explained and provided below).

See also the section on Donald Kirkpatrick's training evaluation model, which represents fundamental theory and principles for evaluating learning and training.

Also see Bloom's Taxonomy of learning domains, which establishes fundamental principles for training design and evaluation of learning, and thereby, training effectiveness.

Erik Erikson's Psychosocial (Life Stages) Theory is very helpful in understanding how people's training and development needs change according to age and stage of life. These generational aspects are increasingly important in meeting people's needs (now firmly a legal requirement within age discrimination law) and also in making the most of what different age groups can offer work and organisations. Erikson's theory is helpful particularly when considering broader personal development needs and possibilities outside of the obvious job-related skills and knowledge.

Multiple Intelligence theory (section includes free self-tests) is extremely relevant to training and learning. This model helps address natural abilities and individual potential which can be hidden or suppressed in many people (often by employers).

Learning Styles theory is extremely relevant to training and teaching, and features in Kolb's model and in the VAK learning styles model (also including a free self-test tool). Learning Styles theory also relates to methods of assessment and evaluation, in which inappropriate testing can severely skew results. Testing, as well as delivery, must take account of people's learning styles; for example, some people find it very difficult to prove their competence in a written test, but can show remarkable competence when asked to give a physical demonstration. Text-based evaluation tools are not the best way to assess everybody.

The Conscious Competence learning stages theory is also a helpful perspective for learners and teachers. The model helps explain the process of learning to trainers and to learners, and also helps to refine judgements about competence, since competence is rarely a simple question of 'can or cannot'. The Conscious Competence model particularly provides encouragement to teachers and learners when feelings of frustration arise due to apparent lack of progress. Progress is not always easy to see, but can often be happening nevertheless.


Lessons from Children's Education

While these various theories and models are chiefly presented here for adult work-oriented training, the principles also apply to children's and young people's education, which provides some useful fundamental lessons for workplace training and development.

Notably, while evaluation and assessment are vital of course (because if you can't measure it you can't manage it) the most important thing of all is to be training and developing the right things in the right ways. Assessment and evaluation (and children's testing) will not ensure effective learning and development if the training and development has not been properly designed in the first place.

Lessons for the workplace are everywhere you look within children's education, so please forgive this diversion...

If children's education in the UK ever actually worked well, successive governments managed to wreck it by the 1980s and have made it worse since then. This was achieved by the imposition of a ridiculously narrow range of skills and delivery methods, plus similarly narrowly-based testing criteria and targets, and a self-defeating administrative burden. 

All of this perfectly characterises arrogance and delusion found in X-Theory management structures, in this case of high and mighty civil servants and politicians, who are not in the real world, and who never went to normal school and whose kids didn't either. A big lesson from this for organisations and workplace training is that X-Theory directives and narrow-mindedness are a disastrous combination. 

Incidentally, according to some of these same people, society is broken and our schools and parents are to blame and are responsible for sorting out the mess. Blaming the victims is another classic behaviour of inept governance. Society is not broken; it just lacks some proper responsible leadership, which is another interesting point:

The quality of any leadership (government or organisation) is defined by how it develops its people. Good leaders have a responsibility to help people understand, develop and fulfil their own individual potential. This is very different to just training them to do a job, or teaching them to pass an exam and get into university, which ignores far more important human and societal needs and opportunities.

Thankfully modern educational thinking (and let's hope policy too) now seems to be addressing the wider development needs of the individual child, rather than aiming merely to transfer knowledge in order to pass tests and exams. 

Knowledge transfer for the purpose of passing tests and exams, especially when based on such an arbitrary and extremely narrow idea of what should be taught and how, has little meaning or relevance to the development potential and needs of most young people, and even less relevance to the demands and opportunities of the real modern world, let alone the life skills required to become a fulfilled confident adult able to make a positive contribution to society.

The desperately flawed UK children's education system of the past thirty years, and its negative impacts on society, offer many useful lessons for organisations. 

Perhaps most significantly, if you fail to develop people as individuals, and only aim to transfer knowledge and skills to meet the organisational priorities of the day, then you will seriously hamper your chances of fostering a happy productive society within your workforce, assuming you want to, which I guess is another subject altogether.

Assuming you do want to develop a happy and productive workforce, it's useful to consider and learn from the mistakes that have been made in children's education:

  • the range of learning is far too narrowly defined and ignores individual potential, which is then devalued or blocked
  • the range of learning focuses on arbitrary criteria set from the policy-makers' own perspectives (classic arrogant X-Theory management - it's stifling and suppressive)
  • policy-makers give greatest or exclusive priority to the obvious 'academic' intelligences (reading, writing, arithmetic, etc), when others of the multiple intelligences (notably interpersonal and intrapersonal capabilities, helpfully encompassed by emotional intelligence) arguably have a far bigger value in work and society (and certainly cause more problems in work and society if under-developed)
  • testing and assessment of learners and teachers is measuring the wrong things, too narrowly, in the wrong way - like measuring the weather with a thermometer
  • testing (the wrong sort, although none would be appropriate for this) is used to assess and pronounce people's fundamental worth - which quite obviously directly affects self-esteem, confidence, ambition, dreams, life purpose, etc (nothing too serious then...)
  • wider individual development needs - especially life needs - are ignored (many organisations and educational policy-makers seem to think that people are robots and that their work and personal lives are not connected; and that work is unaffected by feelings of well-being or depression, etc)
  • individual learning styles are ignored (learning is delivered mainly through reading and writing when many people are far better at learning through experience, observation, etc - again see Kolb and VAK)
  • testing and assessment focuses on proof of knowledge in a distinctly unfair situation only helpful to certain types of people, rather than assessing people's application, interpretation and development of capabilities, which is what real life requires (see Kirkpatrick's model - and consider the significance of assessing what people do with their improved capability, beyond simply assessing whether they've retained the theory, which means relatively very little)
  • children's education has traditionally ignored the fact that developing confident happy productive people is much easier if primarily you help people to discover what they are good at - whatever it is - and then build on that.

Teaching, training and learning must be aligned with individual potential, individual learning styles, and wider life development needs, and this wide flexible individual approach to human development is vital for the workplace, just as it is for schools.

Returning to consider workplace training itself, and the work of Leslie Rae:


Evaluation of Workplace Learning and Training

There have been many surveys on the use of evaluation in training and development (see the research findings extract example below). While surveys might initially appear heartening, suggesting that many trainers/organisations use training evaluation extensively, when more specific and penetrating questions are asked, it is often the case that many professional trainers and training departments are found to use only 'reactionnaires' (general vague feedback forms), including the invidious 'Happy Sheet' relying on questions such as 'How good did you feel the trainer was?' and 'How enjoyable was the training course?'. As Kirkpatrick, among others, teaches us, even well-produced reactionnaires do not constitute proper validation or evaluation of training.

For effective training and learning evaluation, the principal questions should be:

  • To what extent were the identified training needs objectives achieved by the programme?
  • To what extent were the learners' objectives achieved?
  • What specifically did the learners learn, or what were they usefully reminded of?
  • What commitment have the learners made about the learning they are going to implement on their return to work?

And back at work,

  • How successful were the trainees in implementing their action plans?
  • To what extent were they supported in this by their line managers?
  • To what extent has the action listed above achieved a Return on Investment (ROI) for the organisation, either in terms of the satisfaction of identified objectives or, where possible, a monetary assessment?

Organisations commonly fail to perform these evaluation processes, especially where:

  • The HR department and trainers do not have sufficient time to do so, and/or
  • The HR department does not have sufficient resources - people and money - to do so.

Obviously, the evaluation cloth must be cut according to available resources (and the cultural atmosphere), which tend to vary substantially from one organisation to another.

The fact remains that good methodical evaluation produces good reliable data; conversely, where little evaluation is performed, little is ever known about the effectiveness of the training.


Evaluation of Training

There are two principal factors which need to be resolved:

  • Who is responsible for the validation and evaluation processes?
  • What resources of time, people and money are available for validation/evaluation purposes? (Within this, consider the effect of variation to these, for instance an unexpected cut in budget or manpower. In other words anticipate and plan contingency to deal with variation.)

Responsibility for Evaluation 

Traditionally, in the main, any evaluation or other assessment has been left to the trainers "because that is their job..." My (Rae's) contention is that a 'Training Evaluation Quintet' should exist, each member of the Quintet having roles and responsibilities in the process (see 'Assessing the Value of Your Training', Leslie Rae, Gower, 2002). Considerable lip service appears to be paid to this, but actual practice tends to involve a lot less.

The 'Training Evaluation Quintet' advocated consists of:

  • senior management
  • the trainer
  • line management
  • the training manager
  • the trainee

Each has their own responsibilities, which are detailed next.


Senior Management

  • Awareness of the need and value of training to the organisation.
  • The necessity of involving the Training Manager (or equivalent) in senior management meetings where decisions are made about future changes for which training will be essential.
  • Knowledge of and support of training plans.
  • Active participation in events.
  • Requirement for evaluation to be performed, and for receipt of regular summary reports.
  • Policy and strategic decisions based on results and ROI data.

The Trainer

  • Provision of any necessary pre-programme work, etc., and programme planning.
  • Identification at the start of the programme of the knowledge and skills level of the trainees/learners.
  • Provision of training and learning resources to enable the learners to learn within the objectives of the programme and the learners' own objectives.
  • Monitoring the learning as the programme progresses.
  • At the end of the programme, assessment of the learning levels achieved, and receipt of reports on these from the learners.
  • Ensuring the production by the learners of an action plan to reinforce, practise and implement learning.

The Line Manager

  • Work-needs and people identification.
  • Involvement in training programme and evaluation development.
  • Support of pre-event preparation and holding briefing meetings with the learner.
  • Giving ongoing, and practical, support to the training programme.
  • Holding a debriefing meeting with the learner on their return to work to discuss, agree, or help to modify the action plan, and to agree action arising from it.
  • Reviewing the progress of learning implementation.
  • Final review of implementation success and assessment, where possible, of the ROI.

The Training Manager

  • Management of the training department and agreeing the training needs and the programme application
  • Maintenance of interest and support in the planning and implementation of the programmes, including a practical involvement where required
  • The introduction and maintenance of evaluation systems, and production of regular reports for senior management
  • Frequent, relevant contact with senior management
  • Liaison with the learners' line managers, and arrangement of learning programmes for those managers on their learning-implementation responsibilities
  • Liaison with line managers, where necessary, in the assessment of the training ROI.

The Trainee or Learner

  • Involvement in the planning and design of the training programme where possible
  • Involvement in the planning and design of the evaluation process where possible
  • Obviously, taking an interest and an active part in the training programme or activity.
  • Completion of a personal action plan during and at the end of the training, for implementation on return to work, and putting this into practice with support from the line manager.
  • Taking an interest in and supporting the evaluation processes.

Although the principal role of the trainee in the programme is to learn, the learner must be involved in the evaluation process. This is essential, since without their comments much of the evaluation could not occur; neither would the new knowledge and skills be implemented. If trainees neglect either responsibility, the business wastes its investment in training. Trainees will assist more readily if the process avoids the look and feel of a paper-chase or number-crunching exercise. Instead, make sure trainees understand the importance of their input - exactly what they are being asked to do, and why.


Training Evaluation and Validation Options

As suggested earlier, what you are able to do, rather than what you would like to do or what should be done, will depend on the resources and cultural support available. The following summarises a spectrum of possibilities within these dependencies.


1 - Do Nothing

Doing nothing to measure the effectiveness and result of any business activity is never a good option, but it is perhaps justifiable in the training area under the following circumstances:

  • If the organisation, even when prompted, displays no interest in the evaluation and validation of the training and learning - from the line manager up to the board of directors.
  • If you, as the trainer, have a solid process for planning training to meet organisational and people-development needs.
  • If you have a reasonable level of assurance or evidence that the training being delivered is fit for purpose, gets results, and that the organisation (notably the line managers and the board, the potential source of criticism and complaint) is happy with the training provision.
  • If you have far better things to do than carry out training evaluation, particularly if evaluation is difficult and cooperation is sparse.

However, even in these circumstances, there may come a time when having kept a basic system of evaluation proves helpful, for example:

  • You receive a sudden unexpected demand for justification of part or all of the training activity. (Such demands can spring up, for example, with a change in management or policy, or a new initiative.)
  • You see the opportunity or need to produce your own justification (for example to increase training resource, staffing or budgets, new premises or equipment).
  • You seek to change job and need evidence of the effectiveness of your past training activities.

Doing nothing is always the least desirable option. At any time somebody more senior to you might be moved to ask "Can you prove what you are saying about how successful you are?" Without evaluation records you are likely to be at a loss for words of proof...


2 - Minimal Action

The absolute minimum action to begin some form of evaluation is as follows:

At the end of every training programme, give the learners sufficient time and support in the form of programme information, and have the learners complete an action plan based on what they have learned on the programme and what they intend to implement on their return to work. 

This action plan should not only include a description of the action intended, but also comments on how they intend to implement it, a timescale for starting and completing it, and any resources required, etc.

A fully detailed action plan always helps the learners to consolidate their thoughts. The action plan will have a secondary use in demonstrating to the trainers, and anyone else interested, the types and levels of learning that have been achieved. The learners should also be encouraged to show and discuss their action plans with their line managers on return to work, whether or not this type of follow-up has been initiated by the manager.
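For illustration only, here is a minimal sketch in Python of how an action plan entry might be recorded, covering the fields described above; the structure, field names and example values are invented, not a prescribed format:

    # Hypothetical sketch of an end-of-programme action plan entry, recording
    # the fields described above: the intended action, how it will be
    # implemented, a timescale, and the resources required.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ActionPlanItem:
        action: str              # description of the action intended
        implementation: str      # how the learner intends to implement it
        start: date              # timescale for starting...
        complete_by: date        # ...and completing the action
        resources: list[str]     # any resources required

    # An invented example entry, ready to discuss with the line manager on return to work
    item = ActionPlanItem(
        action="Hold monthly one-to-one reviews with each team member",
        implementation="Book recurring meetings and use the review template from the course",
        start=date(2024, 1, 8),
        complete_by=date(2024, 3, 29),
        resources=["review template", "one hour per person per month"],
    )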


3 - Minimal Desirable Action leading to Evaluation

When returning to work to implement the action plan the learner should ideally be supported by their line manager, rather than have the onus for implementation rest entirely on the learner. 

The line manager should hold a debriefing meeting with the learner soon after their return to work, covering a number of questions, basically discussing and agreeing the action plan and arranging support for the learner in its implementation. 

As described earlier, this is a clear responsibility of the line manager, which demonstrates to senior management, the training department and, certainly not least, the learner, that a positive attitude is being taken to the training. Contrast this with, as often happens, a member of staff being sent on a training course, after which all thoughts of management follow-up are forgotten.

The initial line manager debriefing meeting is not the end of the learning relationship between the learner and the line manager. At the initial meeting, objectives and support must be agreed, then arrangements made for interim reviews of implementation progress. After this, when appropriate, a final review meeting needs to consider future action.

This process requires minimal action by the line manager - it involves no more than the sort of observations a line manager would normally make in monitoring the actions of his or her staff. This process of review meetings requires little extra effort and time from the manager, but does much to demonstrate, at the very least, to the staff that their manager takes training seriously.


4 - Training Programme Basic Validation Approach

The action plan and implementation approach described in (3) above places responsibility on the learners and their line managers, and, apart from the provision of advice and time, does not require any resource involvement from the trainer.

There are two further parts of an approach which also require only the provision of time for the learners to describe their feelings and information. The first is the reactionnaire, which seeks the views, opinions, feelings, etc., of the learners about the programme. This is not at a 'happy sheet' level, nor a simple tick-list - but one which allows realistic feelings to be stated.

This sort of reactionnaire is described in the book ('Assessing the Value of Your Training', Leslie Rae, Gower, 2002). This evaluation seeks a score for each question against a 6-point range of Good to Bad, and also the learners' own reasons for the scores, which are especially important if a score is low.

Reactionnaires should not be automatic events on every course or programme. This sort of evaluation can be reserved for new programmes (for example, the first three events) or when there are indications that something is going wrong with the programme.
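As an illustration of this kind of instrument, here is a minimal sketch in Python of how 6-point scores and their reasons might be captured, with low scores flagged for follow-up; this is an invented structure for illustration, not the reactionnaire defined in Rae's book:

    # Hypothetical sketch of a 6-point reactionnaire record (1 = Bad, 6 = Good),
    # capturing the learner's own reason alongside each score.
    from dataclasses import dataclass, field
    from statistics import mean

    @dataclass
    class ReactionnaireItem:
        question: str
        score: int          # 1 (Bad) .. 6 (Good)
        reason: str = ""    # the learner's own reason - important when the score is low

    @dataclass
    class Reactionnaire:
        learner: str
        items: list[ReactionnaireItem] = field(default_factory=list)

        def low_scores(self, threshold: int = 3) -> list[ReactionnaireItem]:
            """Items needing follow-up - a low score without a reason tells us little."""
            return [i for i in self.items if i.score <= threshold]

        def average(self) -> float:
            return mean(i.score for i in self.items)

    # Invented usage example
    r = Reactionnaire("A. Learner", [
        ReactionnaireItem("Relevance of the content to my objectives", 5),
        ReactionnaireItem("Pace of the programme", 2, "Too much material on day one"),
    ])
    print(f"Average {r.average():.1f}; follow up: {[i.question for i in r.low_scores()]}")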

Sample reactionnaires are available in the set of free training evaluation tools.

The next evaluation instrument, like the action plan, should be used at the end of every course if possible. This is the Learning Questionnaire (LQ), which can be a relatively simple instrument asking the learners what they have learned on the programme, what they have been usefully reminded of, and what was not included that they expected to be included, or would have liked to have been included. 

Scoring ranges can be included, but these are minimal and are subordinate to the text comments made by the learners. There is an alternative to the LQ called the Key Objectives LQ (KOLQ), which gauges the amount of learning achieved by posing the relevant questions against the list of Key Objectives produced for the programme. When a reactionnaire and LQ/KOLQ are used, they must not be filed away and forgotten at the end of the programme, as is the common tendency, but used to produce a training evaluation and validation summary.

A factually-based evaluation summary is necessary to support claims that a programme is 'good, effective, and satisfies the objectives set'. Evaluation summaries can also be helpful in publicising the training programme, etc.
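As a minimal sketch in Python of how KOLQ responses might be rolled up into such a summary (assuming, for illustration only, a 1-6 scoring range like the reactionnaire's; the objective names are invented):

    # Hypothetical sketch: rolling KOLQ scores up into a simple validation
    # summary - the average learning score per key objective across learners.
    from collections import defaultdict

    def kolq_summary(responses: list[dict[str, int]]) -> dict[str, float]:
        """One dict per learner, mapping key objective -> score (assumed 1-6)."""
        totals: dict[str, list[int]] = defaultdict(list)
        for response in responses:
            for objective, score in response.items():
                totals[objective].append(score)
        return {obj: sum(scores) / len(scores) for obj, scores in totals.items()}

    # Invented example: two learners rating their learning against the key objectives
    print(kolq_summary([
        {"Run an appraisal meeting": 5, "Give constructive feedback": 4},
        {"Run an appraisal meeting": 6, "Give constructive feedback": 3},
    ]))
    # {'Run an appraisal meeting': 5.5, 'Give constructive feedback': 3.5}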

Example Learning Questionnaires and Key Objectives Learning Questionnaires are included in the set of free evaluation tools.


5 - Total Evaluation Process

If it becomes necessary, the processes described in (3) and (4) can be combined and supplemented by other methods to produce a full evaluation process that covers all eventualities. Few occasions or environments allow this full process to be applied, particularly when there is no Quintet support, but it is the ultimate aim. The process is summarised below, and a simple tracking sketch follows the list:

  • Training needs identification and setting of objectives by the organisation
  • Planning, design and preparation of the training programmes against the objectives
  • Pre-course identification of people with needs and completion of the preparation required by the training programme
  • Provision of the agreed training programmes
  • Pre-course briefing meeting between learner and line manager
  • Pre-course or start-of-programme identification of learners' existing knowledge, skills and attitudes - see the '3-Test' before-and-after training example tool: manual version (pdf), manual version (xls) and working file version. (I am grateful to F Tarek for sharing the Arabic translation 'three-test' version as a pdf file and as a doc file.)
  • Interim validation as programme proceeds
  • Assessment of terminal knowledge, skills, etc., and completion of perceptions/change assessment ('3-Test' example tool and manual version and working file version)
  • Completion of end-of-programme reactionnaire
  • Completion of end-of-programme Learning Questionnaire or Key Objectives Learning Questionnaire
  • Completion of Action Plan
  • Post-course debriefing meeting between learner and line manager
  • Line manager observation of implementation progress
  • Review meetings to discuss progress of implementation
  • Final implementation review meeting
  • Assessment of ROI
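As a simple illustrative sketch in Python (the stage names merely paraphrase the list above; this is not a prescribed system), each learner's progress through the process could be tracked as an ordered checklist:

    # Hypothetical sketch: tracking a learner's progress through the total
    # evaluation process as an ordered checklist of stages.
    EVALUATION_STAGES = [
        "TNA and objectives set",
        "Pre-course briefing with line manager",
        "3-Test: pre-training self-assessment",
        "Interim validation",
        "3-Test: post-training and revised pre-training assessment",
        "Reactionnaire",
        "Learning Questionnaire / KOLQ",
        "Action plan completed",
        "Post-course debriefing with line manager",
        "Implementation reviews",
        "Final review",
        "ROI assessment",
    ]

    def outstanding(completed: set[str]) -> list[str]:
        """Stages not yet done, in process order."""
        return [s for s in EVALUATION_STAGES if s not in completed]

    # Invented usage example
    print(outstanding({"TNA and objectives set", "Reactionnaire"}))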

Whatever you do, do something. 

The processes described above allow considerable latitude depending on resources and cultural environment, so there is always the opportunity to do something - obviously the more tools used and the wider the approach, the more valuable and effective the evaluation will be.

However, be pragmatic. Large expensive critical programmes will always justify more evaluation and scrutiny than small, one-off, non-critical training activities. 

Where there is heavy investment and expectation, the evaluation should be correspondingly detailed and complete. Training managers particularly should clarify measurement and evaluation expectations with senior management prior to embarking on substantial new training activities, so that appropriate evaluation processes can be established when the programme itself is designed.

Where large and potentially critical programmes are planned, training managers should err on the side of caution - ensure adequate evaluation processes are in place. As with any investment, a senior executive is always likely to ask, "What did we get for our investment?", and when he asks, the training manager needs to be able to provide a fully detailed response.


Measuring Improvement Using Self-Assessment

The '3-Test' before-and-after training example (see manual version (pdf) and manual version (xls) and working file version) is a useful tool and helpful illustration of the challenge in measuring improvement in ability after training, using self-assessment.

A vital element within the tool is the assessment called 'revised pre-trained ability', which is carried out after training.

The 'revised pre-trained ability' is a reassessment to be carried out after training of the ability level that existed before training.

This will commonly be significantly different to the ability assessment made before training, because by implication, we do not fully understand competence and ability in a skill/area before we are trained in it.

People commonly over-estimate their ability before training. After training many people realise that they actually had lower competence than they first believed (i.e., before receiving the training).

It is important to allow for this when attempting to measure real improvement using self-assessment. This is the reason for revising (after training) the pre-trained assessment of ability.

Additionally, in many situations after training, people's ideas of competence in a particular skill/area can expand hugely. They realise how big and complex the subject is and they become more conscious of their real ability and opportunities to improve. Because of this it is possible for a person before training to imagine (in ignorance) that they have a competence level of say 7 out of 10. After training their ability typically improves, but also so does their awareness of the true nature of competency, and so they may then judge themselves - after training - only to be say 8 or 7 or even 'lower' at 6 out of 10.

This looks like a regression. It's not of course, which is why a reassessment of the pre-trained ability is important. Extending the example, a person's revised assessment of their pre-trained ability could be say 3 or 4 out of 10 (revised downwards from 7/10), because now the person can make an informed (revised) assessment of their actual competence before training.

A useful reference model in understanding this is the Conscious Competence learning model. Before we are trained we tend to be unconsciously incompetent (unaware of our true ability and what competence actually is). After training we become more consciously aware of our true level of competence, as well as hopefully becoming more competent too. When we use self-assessment tools it is important to allow for this, hence the design of the '3-Test' before-and-after training tool - see also manual version (pdf) and manual version (xls).

In other words: when measuring improvement between before and after training using self-assessment, it is useful first to revise the pre-trained assessment, because before training our assessment of ability is usually over-optimistic, which can falsely suggest a small improvement or even a regression (because we thought we were more skilled than we now realise we actually were).
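To make the arithmetic concrete, here is a minimal sketch in Python using the example figures above (an illustration of the principle only, not Rae's actual 3-Test layout):

    # Hypothetical sketch of the '3-Test' arithmetic: naive improvement
    # (post-training minus original pre-training score) can falsely suggest
    # regression; real improvement uses the revised pre-trained assessment.

    def improvement(pre: int, post: int, revised_pre: int) -> tuple[int, int]:
        """Return (apparent, real) improvement for one skill, scored out of 10."""
        apparent = post - pre        # misleading: 'pre' was self-rated in ignorance
        real = post - revised_pre    # informed: pre-trained ability re-rated after training
        return apparent, real

    # The example from the text: self-rated 7/10 before training, 6/10 after,
    # with the pre-trained rating revised down to 3/10 once better informed.
    apparent, real = improvement(pre=7, post=6, revised_pre=3)
    print(f"Apparent change: {apparent:+d}")   # -1 - looks like regression
    print(f"Real improvement: {real:+d}")      # +3 - the true picture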

Note that this self-assessment aspect of learning evaluation is only part of the overall evaluation which can be addressed. See Kirkpatrick's learning evaluation model for a wider appreciation of the issues.

(I am grateful to F Tarek for sharing the Arabic translation 'three-test' version, available as a pdf file and as a doc file.)


The Trainer's Overall Responsibilities - Aside from Training Evaluation

Over the years the trainer's roles have changed, but the basic purpose of the trainer is to provide efficient and effective training programmes. The following suggests the elements of the basic role of the trainer, but it must be borne in mind that different circumstances will require modifications of these activities.

1. The basic role of a trainer (or however they may be designated) is to offer and provide efficient and effective training programmes aimed at enabling the participants to learn the knowledge, skills and attitudes required of them.

2. A trainer plans and designs the training programmes, or otherwise obtains them (for example, distance learning or e-technology programmes on the Internet or on CD/DVD), in accordance with the requirements identified from the results of a TNIA (Training Needs Identification and Analysis - or simply TNA, Training Needs Analysis) for the relevant staff of an organisation or organisations.

3. The training programmes cited at (1) and (2) must be completely based on the TNIA, which has been either: (a) completed by the trainer on behalf of and at the request of the relevant organisation, or (b) determined in some other way by the organisation.

4. Following discussion with or direction by the organisation management who will have taken into account costs and values (e.g. ROI - Return on Investment in the training), the trainer will agree with the organisation management the most appropriate form and methods for the training.

5. If the appropriate form for satisfying the training need is a direct training course or workshop, or an intranet-provided programme, the trainer will design this programme using the most effective approaches, techniques and methods, integrating face-to-face practices with various forms of e-technology wherever this is possible or desirable.

6. If the appropriate form for satisfying the training need is some form of open learning programme or e-technology programme, the trainer, with the support of the organisation management, will obtain the relevant materials, plan their utilisation, and be prepared to support the learner in their use.

7. The trainer, having contacted the potential learners - preferably through their line managers - to arrange any pre-programme activity and/or initial evaluation activities, should provide the appropriate training programme(s) to the learners. During and at the end of the programme, the trainer should ensure that: (a) an effective form of training/learning validation is followed, and (b) the learners complete an action plan for implementation of their learning when they return to work.

8. Having reviewed the validation results, the trainer should provide, as necessary, an analysis of the changes in the knowledge, skills and attitudes of the learners to the organisation management, with any recommendations deemed necessary. The review should include consideration of the effectiveness of the content of the programme and the effectiveness of the methods used to enable learning, that is, whether the programme satisfied its own objectives and those of the learners.

9. Continue to provide effective learning opportunities as required by the organisation.

10. Pursue their own CPD (Continuing Professional Development) by all possible developmental means - training programmes and self-development methods.

11. Arrange and run educative workshops for line managers on the subject of their fulfilment of their training and evaluation responsibilities.

Depending on the circumstances and the decisions of the organisation management, trainers do not, under normal circumstances:

1. Make organisational training decisions without the full agreement of the organisational management.

2. Take part in the post-programme learning implementation or evaluation unless the learners' line managers cannot or will not fulfil their training and evaluation responsibilities.

Unless circumstances force them to behave otherwise, the trainer's role is to provide effective training programmes and the role of the learners' line managers is to continue the evaluation process after the training programme, counsel and support the learner in the implementation of their learning, and assess the cost-value effectiveness or (where feasible) the ROI of the training. 

Naturally, if it will help the trainers to become more effective in their training, they can take part in (but not run) any pre- and post-programme actions as described, always remembering that these are the responsibilities of the line manager.


A Note about ROI (Return on Investment) in Training

Attempting financial ROI assessment of training is a controversial issue. 

It's a difficult task to do in absolute terms due to the many aspects to be taken into account, some of which are very difficult to quantify at all, let alone to define in precise financial terms. Investment - the cost - in training may be easier to identify, but the benefits - the return - are notoriously tricky to pin down. What value do you place on improved morale? Reduced stress levels? Longer careers? Better qualified staff? Improved time management? All of these can be benefits - returns - on training investment. Attaching a value and relating this to a single cause, i.e., training, is often impossible. At best therefore, many training ROI assessments are necessarily 'best estimates'.

If ROI-type measures are required in areas where reliable financial assessment is not possible, it's advisable to agree a 'best possible' approach, or a 'notional indicator' and then ensure this is used consistently from occasion to occasion, year on year, course to course, allowing at least a comparison of like with like to be made, and trends to be spotted, even if financial data is not absolutely accurate.

In the absence of absolutely quantifiable data, find something that will provide a useful if notional indication. For example, after training sales people, the increased number and value of new sales made is an indicator of sorts. After motivational or team-building training, reduced absentee rates would be an expected output. After an extensive management development programme, the increase in internal management promotions would be a measurable return. 

Find something to measure, rather than say it can't be done at all, but be pragmatic and limit the time and resource spent according to the accuracy and reliability of the input and output data. Also, refer back to the original Training Needs Analysis that prompted the training itself - what were the business performance factors that the training sought to improve? Use these original drivers to measure, and to relate the training to, the organisational return achieved.
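For illustration, here is a sketch in Python of the basic ROI arithmetic using invented figures; the benefit side is a notional estimate against the original TNA drivers, so the result is best used for like-with-like comparison rather than as an absolute measure:

    # Hypothetical sketch of a notional training ROI calculation. Treat the
    # result as a comparator between programmes or years, not an absolute truth.

    def notional_roi(total_cost: float, estimated_benefit: float) -> float:
        """Classic ROI percentage: net benefit as a percentage of cost."""
        return (estimated_benefit - total_cost) / total_cost * 100

    # e.g. a sales programme costing 12,000 credited (notionally) with 30,000
    # of attributable new sales margin in the following year
    print(f"Notional ROI: {notional_roi(12_000, 30_000):.0f}%")  # 150%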

The problems in assessing ROI are more challenging in public and non-profit-making organisations - government departments, charities, voluntary bodies, etc. ROI assessment in these environments can be so difficult as to be insurmountable, so that the organisation remains satisfied with general approximations or vague comparisons, or accepts wider forms of justification for the training without invoking detailed costing.

None of this is to say that cost- and value-effectiveness assessment should not be attempted. At the very least, direct costs must be controlled within agreed budgets, and where possible, attempts at more detailed assessment of returns should be made.

It may be of some consolation to know that Jack Phillips, an American ROI 'guru', recently commented about training ROI: "Organisations should be considering implementing ROI impact studies very selectively on only 5 to 10 per cent of their training programme, otherwise it becomes incredibly expensive and resource intensive."


Training Evaluation Research

This research extract is an example of the many survey findings that indicate the need to improve evaluation of training and learning. It is useful to refer to the Kirkpatrick Learning Evaluation model to appreciate the different stages at which learning and training effectiveness should be evaluated.

Research published by the UK's British Learning Association in May 2006 found that 72% of a representative sample of the UK's leading learning professionals considered that learning tends not to lead to change.

Only 51% of respondents said that learning and training was evaluated several months after the learning or training intervention.

The survey was carried out among delegates of the 2006 conference of the UK's British Learning Association.

Speaking on the findings, David Wolfson, Chairman of the British Learning Association said, "These are worrying figures from the country's leading learning professionals. If they really do reflect training in the UK, then we have to think long and hard about how to make the changes that training is meant to give. It suggests that we have to do more - much more - to ensure that learning interventions really make a difference..."

The British Learning Association is a centre of expertise that produces best practice examples, identifies trends and disseminates information on both innovative and well-established techniques and technologies for learning. The aim is to synthesise existing knowledge, develop original solutions and disseminate this to a wide cross sector membership.


Summary

There are many different ways to assess and evaluate training and learning.

Remember that evaluation is for the learner too - evaluation is not just for the trainer or organisation.

Feedback and test results help the learner know where they are, and directly affect the learner's confidence and their determination to continue with the development - in some cases their commitment to their own future personal development altogether.

Central to improving training and learning is the question of bringing more meaning and purpose to people's lives, aside from merely focusing on skills and work-related development and training courses.

Learning and training enables positive change and improvement - for people and employers - when people's work is aligned with people's lives - their strengths, personal potential, goals and dreams - outside work as well as at work.

Evaluation of training can only be effective if the training itself is effective and appropriate. Testing the wrong things in the wrong way will give you unhelpful data, and could be even more unhelpful for learners.

Consider people's learning styles when evaluating personal development. Learning styles are essentially a perspective of people's preferred working, thinking and communicating styles. Written tests do not enable all types of people to demonstrate their competence.

Evaluating retention of knowledge only is a very limited form of assessment. It will not indicate how well people apply their learning and development in practice. Revisit Kirkpatrick's model and focus as much as you can on how the learning and development is applied, and the change and improvements achieved, in the working situation.

See the notes about organisational change and ethical leadership to help understand and explain these principles further, and how to make learning and development more meaningful and appealing for people.

Leslie Rae: further references and recommended reading

Annett, Duncan, Stammers and Gray, Task Analysis, Training Information Paper 6, HMSO, 1971. 
Bartram, S. and Gibson, B., Training Needs Analysis, 2nd edition, Gower, 1997. 
Bartram, S. and Gibson, B., Evaluating Training, Gower, 1999. 
Bee, Frances and Roland, Training Needs Analysis and Evaluation, Institute of Personnel and Development, 1994. 
Boydell, T. H., A Guide to the Identification of Training Needs, BACIE, 1976. 
Boydell, T. H., A Guide to Job Analysis, BACIE, 1970. A companion booklet to A Guide to the Identification of Training Needs. 
Bramley, Peter, Evaluating Training Effectiveness, McGraw-Hill, 1990. 
Buckley, Roger and Caple, Jim, The Theory and Practice of Training, Kogan Page, 1990. (Chapters 8 and 9.) 
Craig, Malcolm, Analysing Learning Needs, Gower, 1994. 
Davies, I. K., The Management of Learning, McGraw-Hill, 1971. (Chapters 14 and 15.) 
Easterby-Smith, M., Braiden, E. M. and Ashton, D., Auditing Management Development, Gower, 1980. 
Easterby-Smith, M., 'How to Use Repertory Grids in HRD', Journal of European Industrial Training, Vol 4, No 2, 1980. 
Easterby-Smith, M., Evaluating Management Development, Training and Education, 2nd edition, Gower, 1994. 
Fletcher, Shirley, NVQs Standards and Competence, 2nd edition, Kogan Page, 1994. 
Hamblin, A. C., The Evaluation and Control of Training, McGraw-Hill, 1974. 
Honey, P., 'The Repertory Grid in Action', Industrial and Commercial Training, Vol II, Nos 9, 10 and 11, 1979. 
ITOL, A Glossary of UK Training and Occupational Learning Terms, ed. J. Brooks, ITOL, 2000. 
Kelly, G.A., The Psychology of Personal Constructs, Norton, 1953. 
Kirkpatrick, D. L., 'Evaluation of Training', in Training and Development Handbook, edited by R. L. Craig, McGraw-Hill, 1976. 
Kirkpatrick, D.L., Evaluating Training Programs: The four levels, Berrett-Koehler, 1996. 
Laird, D., Approaches to Training and Development, Addison-Wesley, 1978. (Chapters 15 and 16.) 
Mager, R. F., Preparing Objectives for Programmed Instruction, Fearon, 1962. (Later re-titled: Preparing Instructional Objectives, Fearon, 1975.) 
Manpower Services Commission, 'A Glossary of Training Terms', HMSO, 1981. 
Newby, Tony, Validating Your Training, Kogan Page Practical Trainer Series, 1992. 
Odiorne, G. S., Training by Objectives, Macmillan, 1970. 
Parker, T. C., 'Statistical Methods for Measuring Training Results', in Training and Development Handbook, edited by R. L. Craig, McGraw-Hill, 1976. 
Peterson, Robyn, Training Needs Analysis in the Workplace, Kogan Page Practical Trainer Series, 1992. 
Phillips, J., Handbook of Training Evaluation and Measurement Methods, 3rd edition, Butterworth-Heinemann, 1997. 
Phillips, J., Return on Investment in Training and Performance Improvement Programs, Butterworth-Heinemann, 1997. 
Phillips, P. P., Understanding the Basics of Return on Investment in Training, Kogan Page, 2002. 
Prior, John (ed.), Handbook of Training and Development, 2nd edition, Gower, 1994. 
Rackham, N. and Morgan, T., Behaviour Analysis in Training, McGraw-Hill, 1977. 
Rackham, N. et al., Developing Interactive Skills, Wellens, 1971. 
Rae, L., 'Towards a More Valid End-of-Course Validation', The Training Officer, October 1983. 
Rae, L., The Skills of Human Relations Training, Gower, 1985. 
Rae, L., 'How Valid is Validation?', Industrial and Commercial Training, Jan.-Feb., 1985. 
Rae, L., Using Evaluation in Training and Development, Kogan Page, 1999. 
Rae, L., Effective Planning in Training and Development, Kogan Page, 2000. 
Rae, L., Training Evaluation Toolkit, Echelon Learning, 2001. 
Rae, L., Trainer Assessment, Gower, 2002. 
Rae, L., Techniques of Training, 3rd edition, Gower, 1995. (Chapter 10.) 
Robinson, K. R., A Handbook of Training Management, Kogan Page, 1981. (Chapter 7.) 
Schmalenbach, Martin, 'The Death of ROI and the Rise of a New Management Paradigm', Journal of the Institute of Training and Occupational Learning, Vol. 3, No. 1, 2002. 
Sheal, P. R., How to Develop and Present Staff Training Courses, Kogan Page, 1989. 
Smith, M. and Ashton, D., 'Using Repertory Grid Techniques to Evaluate Management Training', Personnel Review, Vol 4, No 4, 1975. 
Stewart, V. and Stewart A., Managing the Manager's Growth, Gower, 1978. (Chapter 13.) 
Thurley, K. E., and Wirdenius, H., Supervision: a Re-appraisal, Heinemann, 1973. 
Warr, P. B., Bird, M. and Rackham, N., The Evaluation of Management Training, Gower, 1970. 
Whitelaw, M., The Evaluation of Management Training: a Review, Institute of Personnel Management, 1972. 
Wills, Mike, Managing the Training Process, McGraw-Hill, 1993.

The core content and tools relating to workplace training evaluation are based on the work of Leslie Rae, MPhil, Chartered FCIPD, FITOL, which is gratefully acknowledged. 

Leslie Rae welcomes comments and enquiries about the subject of training and its evaluation, and can be contacted via BusinessBalls or direct: Wrae804418@aol.com