Every time the Process Test Results action is triggered it should send the interaction data to the LMS, so any failed questions should get recorded.
Or are learners repeating each question until they get it right, so that by the time the data is sent all the questions are correct?
I am guessing the latter. If moving to the first approach isn't an option, then I would suggest adding a variable to track question attempts. I do this for a number of modules that require 100%. Each time a question is processed (i.e. when Submit is clicked), add the question number to a tracking variable (e.g. 'AttemptsTracker'). I'd append a separator symbol too, so that the variable can be split back out at a later time (and so that Lectora doesn't perform a calculation on the numbers).
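The append logic behind that Submit action can be sketched like this (a minimal illustration only; the function name and the "-" separator are my assumptions, and in Lectora itself this would be a Modify Variable action rather than code):

```python
def record_attempt(attempts_tracker: str, question_number: int) -> str:
    """Append the question number plus a '-' separator to the tracker string."""
    return attempts_tracker + str(question_number) + "-"

# Simulate a learner who needed three tries on Q1, then passed Q2 first time.
tracker = ""
for q in [1, 1, 1, 2]:
    tracker = record_attempt(tracker, q)
# tracker is now "1-1-1-2-"
```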
Within the suspend_data variable you would end up with a string looking something like 1-1-1-2-3-4-5-5-6-7-8. This tells us that they needed 3 attempts to pass Q1, 1 attempt to pass Q2, and so on. You can use something like Excel to split the string using the symbol "-" as the separator, or use a formula to count each time "1-" or "2-" appears. If you need to know exactly what they answered for each question you'd need more complex actions (to add e.g. 1a or 1b instead of just the question number). Keep it short though, as there is a limit to how much data suspend_data can hold.
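If you'd rather not do the splitting in Excel, the counting step is straightforward in any scripting language. A minimal sketch, assuming the "-" separator described above (the function name is mine):

```python
from collections import Counter

def attempts_per_question(suspend_string: str) -> dict:
    """Split the tracker string on '-' and count how often each question appears."""
    parts = [p for p in suspend_string.split("-") if p]  # drop empty trailing piece
    return dict(Counter(parts))

counts = attempts_per_question("1-1-1-2-3-4-5-5-6-7-8")
# counts["1"] is 3 (three attempts at Q1), counts["5"] is 2, the rest are 1
```

The same idea works as an Excel formula per question, e.g. counting occurrences of "1-" in the exported string, but a script scales better across many learners.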
You would obviously need to be able to access the suspend_data field from your LMS reports to get this information.
It's a bit of a clunky solution, but it works for us when we need this information.