Scoring happens when a test is processed, and it is done in the context of the entire test. Interaction data may provide the scoring you are looking for, but that data is currently only passed to the LMS and would not be easy to get at, especially on a test page.
Lectora Desktop does not write out the correct answers with the question data, so you would have to build the correct answer into your own scoring or conditional logic. I do not see any comparisons for individual items in our code; in fact, our isCorrect comparison seems to look only at whether the entire answer is correct. To grade each choice, it would also need some notion of how many points, or what percentage, is required to consider the question correct.
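To make the workaround concrete, here is a minimal sketch of the kind of conditional logic you would have to write yourself today. Nothing here is a Lectora API; the function name, the hard-coded answer key, and the scoring scheme (one point per correct selection, an optional penalty per wrong selection) are all assumptions for illustration.

```javascript
// Hypothetical per-choice grader -- NOT a Lectora API, just the sort of
// custom scoring logic an author would have to supply, since the correct
// answers are not written out with the question data.
function gradeEachChoice(correctChoices, selectedChoices, penalizeWrong = true) {
  const correct = new Set(correctChoices);
  const selected = new Set(selectedChoices);

  let points = 0;
  for (const choice of selected) {
    if (correct.has(choice)) {
      points += 1;              // credit for each correct selection
    } else if (penalizeWrong) {
      points -= 1;              // optional penalty for each wrong selection
    }
  }
  points = Math.max(0, points); // never go below zero

  const maxPoints = correct.size;
  return {
    points,
    percent: maxPoints ? Math.round((points / maxPoints) * 100) : 0,
    isFullyCorrect: points === maxPoints && selected.size === correct.size
  };
}
```

For example, with a key of three correct choices and two of them selected, this returns 2 points (67%); selecting one right and one wrong choice with the penalty on nets zero.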
So it seems we need a few things:
1) Access to a score for a question that has “Grade each choice” selected, possibly via a reserved variable set after the question is processed.
2) Correct and Incorrect indicators associated with a question, displayed per choice after the question is processed.
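As a rough sketch of how items 1) and 2) might surface to script authors: given the full choice list, the answer key, and the learner's selections, you can derive both a per-choice correct/incorrect state and a question score in one pass. Every name below is hypothetical; none of this exists in Lectora today, and the "percent of choices graded correctly" scoring is just one possible scheme.

```javascript
// Hypothetical shape for the two requested features -- not an existing API.
// Returns 2) a per-choice Correct/Incorrect indicator list, and
//         1) a per-question score derived from those indicators.
function processQuestionResult(allChoices, correctChoices, selectedChoices) {
  const correct = new Set(correctChoices);
  const selected = new Set(selectedChoices);

  // 2) a choice is graded "correct" when its selected state matches the key:
  //    correct choices should be selected, incorrect ones left unselected.
  const indicators = allChoices.map(choice => ({
    choice,
    correct: correct.has(choice) === selected.has(choice)
  }));

  // 1) one possible score: percentage of choices graded correctly.
  const score = Math.round(
    (indicators.filter(i => i.correct).length / indicators.length) * 100
  );

  return { indicators, score };
}
```

With choices a/b/c, key a+b, and only a selected, this grades a and c as correct states (a selected, c untouched), b as incorrect (missed), and scores 67.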
Does that sound about right? I will work with @tea to make sure we get this written up. We will also consider whether there are some quick things we could do to make these available via scripting (for JS access to the score, or to indicate which choices are correct).