Evaluating Employability Services

When your partnership begins a new project or way of working, it is essential that its successes and challenges are accurately monitored and evaluated. This in turn leads to positive changes and maximum impact.

This impact could be on anything from the partnership's culture to the service users but, ultimately, always on the overall local and national employability figures.

In planning evaluation, it is important to be clear on the objectives of the service/intervention. There are two basic questions to ask:

  • Can I increase the reach of my service?
  • Can I improve what the customer is getting out of it?

Once you are able to answer these questions, expand your thinking to consider what funders, society, the economy or the Government may gain through your service.

  • Consider hard and soft outcomes for the user - not only what they actually did as a result of the intervention, but also how they felt
  • Think wider than the objectives of the programme/intervention and consider any unintended benefits or consequences e.g. improved mental health
  • Consider the needs of the funder of the programme or intervention
  • Consider wider society as a whole e.g. impact on productivity or reduced costs of healthcare as a result of improved wellbeing.

Ideally you will plan your evaluation at the same time as you are planning your programme. The first step is to define success. There is no single answer to this: it will depend on the intervention, the time and resources you have available, and what you can practically measure.

It is important to include both quantitative information (facts and figures) and qualitative information (case studies and customer views).

There are varying levels of robustness in how you can measure impact. The most appropriate level for your intervention will depend on its scale, feasibility and the resources available to carry out the work.

  • Time series analysis - take a baseline measure at the beginning of the intervention and compare with the end results to identify progress.
  • Time series analysis with a control group - compare your client group with a group with the same characteristics that did not experience the intervention, asking the same questions at each stage, to identify the added value of your intervention as opposed to other influencing factors.
  • Randomised controlled trial - as for the control-group design above, but the trial is done with sufficient numbers to ensure that the results are statistically significant, and the control group is randomly selected. This is only suitable for fairly major projects.
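The first two designs above can be sketched in a few lines of Python. All client figures below are illustrative assumptions (hypothetical before/after scores on some agreed success measure), not real data:

```python
# Minimal sketch of time series analysis, with and without a control
# group. Scores here are hypothetical self-rated employability scores
# taken at the start (baseline) and end (endpoint) of an intervention.

def mean(xs):
    return sum(xs) / len(xs)

def progress(baseline, endpoint):
    """Time series analysis: average change from baseline to endpoint."""
    return mean(endpoint) - mean(baseline)

def added_value(intervention, control):
    """Time series analysis with a control group: the intervention
    group's progress over and above a matched control group's progress,
    i.e. the change attributable to the intervention rather than to
    other influencing factors."""
    return progress(*intervention) - progress(*control)

# Illustrative figures only: (baseline scores, endpoint scores).
intervention = ([3, 4, 2, 5], [6, 7, 5, 8])
control      = ([3, 4, 3, 5], [4, 5, 3, 6])

print(progress(*intervention))              # 3.0 - raw before/after change
print(added_value(intervention, control))   # 2.25 - change net of other factors
```

Note the gap between the two numbers: some of the raw progress would likely have happened anyway, which is exactly what the control group is there to reveal.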

Developed by WBS, the attached document provides a simple how-to guide to assist you with the evaluation process, covering the following key areas of consideration:

  • Why is Evaluation Helpful?
  • What is being Evaluated?
  • Who Should Conduct the Evaluation?
  • The Evaluation Plan
  • Choosing the Correct Evaluation Methods
  • Analysing the Evaluation Data
  • Reporting the Evaluation Results

There are three key factors that make up a robust evaluation.

  • Quantitative impact of the intervention/programme - e.g. action taken as a result, such as how many clients opened a bank account or took up a debt solution.
  • Qualitative measures of customer satisfaction/wider impact - did they understand the process? How were they treated by staff? Did the intervention meet their expectations and solve their problem? How do they feel as a result?
  • An illustration of the customer journey - a case study of an individual's experience to bring the service to life for those not involved, identify any breaks or blockages in effective service provision, or provide a good news story.