Bud Mark | Solution Detail

This article describes the Bud Mark solution in detail, providing context and information for providers who use Bud Mark.

AI Evaluation Engine

The AI engine does most of the heavy lifting: it takes a Learner's submission, evaluates it against a written Rubric, and provides feedback in a prescribed format.

The AI Engine is a trained, prompted Azure OpenAI model. The model was selected for its performance, its instruction-following capabilities, its information security, and its ability to perform well on academic and analytical tasks.

Inputs

When running the evaluation engine, certain information is sent to the model: 

Rubric

The input into the Evaluation Engine from the Rubric consists of several parts:

  • The title - gives the model a label or heading that is used in the feedback.
  • The Rubric context - gives the model background on the task, helping it understand and ground the feedback in the same context.
  • The Rubric Detail - the most important part, consisting of criteria, performance levels, and descriptors. These three things allow the AI to categorise and ‘score’ a submission against the Rubric (sketched below).
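
To make that structure concrete, a Rubric input could be represented roughly as follows. This is a sketch only; the field names are illustrative assumptions, not Bud Mark's actual schema.

    # Illustrative shape of a Rubric input; key names are assumptions, not the real schema.
    rubric = {
        "title": "Workplace Hazards Essay",
        "context": "Learners were asked to write a short essay on common workplace hazards.",
        "criteria": [
            {
                "criterion": "Hazard identification",
                "performance_levels": [
                    {"level": "Pass", "descriptor": "Identifies at least three common hazards."},
                    {"level": "Merit", "descriptor": "Identifies hazards and explains their likely impact."},
                    {"level": "Distinction", "descriptor": "Also proposes practical controls."},
                ],
            },
        ],
    }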

Learner Submission

The submission is the document the model evaluates against the provided Rubric. There are some restrictions on what a submission can look like and what the model can use (a pre-check sketch follows below).

  • The file must be less than 500 megabytes.
  • The file must be a .docx or .pdf.
  • Only the text can be used within the evaluation. Images, videos, and any other non-text content will not be used.
  • Documents are parsed so that any headers or footers are stripped out and only the submission content is used in the evaluation. This can affect the word-count conditions below.
  • A submission must be more than 100 and fewer than 8,000 words long. Note that all content within the file counts towards this total, even repeated activity instructions or heading pages.
  • Submissions should be written in English. Although the model can translate and still provide feedback on submissions in other languages, it is not trained, fine-tuned, or tested against them, so they are not supported as standard.

Whilst Bud Mark is aimed at UK training providers and the recommendation is for submissions to be in English, the engine can understand other languages. Feedback will always be presented in English.
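
As a rough illustration of these restrictions, a provider could pre-check a file before submission along the following lines. This is a hedged sketch, not Bud Mark's actual validation logic; in particular, the whitespace-based word count here may differ from how the engine counts words after stripping headers and footers.

    import os

    # Illustrative pre-submission checks mirroring the documented restrictions.
    MAX_BYTES = 500 * 1024 * 1024           # file must be less than 500 MB
    ALLOWED_EXTENSIONS = {".docx", ".pdf"}  # only these file types are accepted
    MIN_WORDS, MAX_WORDS = 100, 8_000       # more than 100, fewer than 8,000 words

    def precheck_submission(path: str, extracted_text: str) -> list[str]:
        """Return the reasons a submission would be rejected (empty list if none)."""
        problems = []
        if os.path.getsize(path) >= MAX_BYTES:
            problems.append("File is 500 MB or larger.")
        if os.path.splitext(path)[1].lower() not in ALLOWED_EXTENSIONS:
            problems.append("File must be .docx or .pdf.")
        word_count = len(extracted_text.split())  # naive count; the engine may count differently
        if not MIN_WORDS < word_count < MAX_WORDS:
            problems.append(f"Word count {word_count} is outside the 100-8,000 range.")
        return problems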

Outputs

The outputs from the evaluation engine are a relevancy score and the feedback.

Relevancy Checks

As part of the evaluation, a relevancy score is provided. The model compares the submission content against the Rubric details and scores, on a one-to-ten scale, how relevant the submission is to the Rubric. If the submission scores five or below, the model will not evaluate it and will instead return the following feedback:

“The submission did not meet the relevancy score threshold. Your score is [number]”

This protects the model from trying to evaluate a submission against something entirely unrelated, which may have been submitted in error.
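
Conceptually, this gate is a simple threshold check, sketched below; the function and constant names are hypothetical, since the real check runs inside the engine.

    # Hypothetical sketch of the relevancy gate described above.
    RELEVANCY_THRESHOLD = 5  # scores of five or below are not evaluated

    def gate_on_relevancy(score: int) -> str | None:
        """Return rejection feedback when the submission is not relevant enough."""
        if score <= RELEVANCY_THRESHOLD:
            return ("The submission did not meet the relevancy score threshold. "
                    f"Your score is {score}")
        return None  # relevant enough; proceed to the full evaluation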

Feedback

The feedback output comes in the following format (a combined sketch follows the list):

  • Feedback Summary – a concise introduction to the detail of the feedback.
  • Strengths – two to three bullet points on things that were done well.
  • Areas for improvement – two to three bullet points on things that could be better.
  • Performance level feedback per criterion row – for each criterion, which performance level the submission achieved.
  • Text feedback per criterion row – for each criterion, a description of what was achieved and how that performance level was reached.
  • Spelling errors – a list of incorrectly spelt words and what the correct version should be.
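
Taken together, a feedback result in this format might look something like the sketch below. The key names and values are illustrative assumptions, not the exact output schema.

    # Illustrative shape of the feedback output; key names are assumptions.
    feedback = {
        "summary": "A solid submission that covers the main hazards clearly.",
        "strengths": [
            "Clear identification of common workplace hazards.",
            "Well-structured argument with relevant examples.",
        ],
        "areas_for_improvement": [
            "Explain the likely impact of each hazard in more depth.",
            "Propose practical controls for the hazards identified.",
        ],
        "criteria": [
            {
                "criterion": "Hazard identification",
                "performance_level": "Merit",
                "feedback": "Hazards are identified and their impact is partly explained.",
            },
        ],
        "spelling_errors": [
            {"word": "hazzard", "correction": "hazard"},
        ],
    }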

Access

The evaluation engine can be accessed in two ways.

The first is within the Bud Mark Management App, where a Rubric can be tested to verify that the output is appropriate. Any Bud Mark Management App user can test a submission.

The second is via the Engine API, which is integrated with Bud Systems activities. More on this in the Technical Integration section below.

Bud Mark Management App

The Bud Mark Management App is where providers create new rubrics and administer existing ones. The app also provides other administrator functions, such as inviting users and managing roles.

Common Components

The app aims to deliver an excellent user experience, accessibility, and usability within a clean, modern user interface. You will notice several patterns throughout the app that make it easier to use.

Navigation

The main pages of Bud Mark are navigated via the vertical navigation menu on the left of the screen, which is always visible and accessible for easy navigation.

Success / error message

These appear as ‘toaster’ notifications at the bottom right of the screen. Success messages appear in green with a thumbs up, while errors appear in red with a thumbs down.

Call to action Buttons

Call to action (CTA) buttons are a prominent dark blue colour. When not already editing a record, the key CTA will appear at the top right. When editing a record, the CTA will appear inline with what you are editing.

Field input validation

Input validation messages appear directly below the field at fault, in red text. The text explains the problem for easy resolution.

Bud Mark Rubric Integration

Bud Mark integrates with Bud, our Learner Management System. This allows users of Bud, commonly Trainers, to use Bud Mark to evaluate Learner Submissions within their existing workflows.

Bud Mark in Bud

Enable an Activity for Bud Mark

For an activity to take advantage of Bud Mark, it must be linked to a Rubric that is published in your Bud Mark organisation. Once linked, the activity must be published in Bud, which then propagates out to existing Learners and makes Bud Mark available for new Learners.

Just like any change to an Activity in Bud, there are rules around when the linking of an Activity to a Rubric will propagate to a specific Learner's Activity. The Activity in the Learner's Learning Plan must not be Completed, Exempt, or Pending Confirmation. If an activity is already ‘Submitted’, the trainer can still use Bud Mark for it.
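
As a hedged illustration, the propagation rule above amounts to a status check along these lines (the status names come from this article; the function itself is hypothetical):

    # Hypothetical sketch of the propagation rule described above.
    BLOCKED_STATUSES = {"Completed", "Exempt", "Pending Confirmation"}

    def rubric_link_propagates(activity_status: str) -> bool:
        """A Rubric link reaches a Learner's Activity unless its status blocks it.
        'Submitted' activities still allow the trainer to use Bud Mark."""
        return activity_status not in BLOCKED_STATUSES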

Using Bud Mark in an Activity

Using Bud Mark in an activity is simple. Once an activity is enabled for Bud Mark, a ‘Mark with Bud Mark’ button is displayed alongside the normal ‘Mark Submission’ button. This integrates the process into the usual workflow, making it easy for a trainer to initiate.

Editing a Rubric once enabled in Bud

Once a Rubric is linked to an Activity in Bud, changes to that Rubric in Bud Mark will automatically synchronise over, so any subsequent submissions are evaluated using the latest version.

If the Rubric is made ‘Archived’ or ‘Draft’ in Bud Mark, the Bud Activity will need to be updated to reflect this. If it is not, Bud will show an error when a user tries to use Bud Mark on that activity.

Technical Integration

Bud Mark integrates with Bud via Application Programming Interfaces (APIs). The APIs follow RESTful conventions, and all requests must be made over HTTPS to ensure data is secure. The APIs are secured, and data is segmented, by an authentication key and a unique Organisation UID.

The integration involves two APIs (a call sketch follows the list):

  • Rubric Finder API - provides access to retrieve the list of Published Rubrics that can be linked to an activity.
  • Rubric Engine API - provides the mechanism to send submissions for evaluation, and returns feedback to be displayed within the Bud workflow.
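
To make the integration concrete, the sketch below shows how a client might call these two APIs. The host, endpoint paths, header names, and payload fields are all illustrative assumptions; consult the actual API documentation for the real contract.

    import requests  # third-party HTTP client

    BASE_URL = "https://bud-mark.example.com"    # placeholder host
    HEADERS = {
        "Authorization": "Bearer <auth-key>",    # authentication key secures the APIs
        "X-Organisation-UID": "<org-uid>",       # assumed header carrying the Organisation UID
    }

    # Rubric Finder API: retrieve the Published Rubrics available to link to an activity.
    rubrics = requests.get(f"{BASE_URL}/rubrics?status=published", headers=HEADERS).json()

    # Rubric Engine API: send a submission for evaluation and receive the feedback.
    evaluation = requests.post(
        f"{BASE_URL}/evaluations",
        headers=HEADERS,
        json={"rubricId": rubrics[0]["id"], "submissionText": "..."},
    ).json()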