Has anyone tried to build a computer-adaptive testing system using a very large number of test items?
For example: you want to determine a person's reading level and then direct them to appropriate material. You might test them with a range of reading texts at different levels, or test and re-test at the same level to confirm it, and then route them as follows: (a) they go down one or more levels, (b) they stay at the same level, or (c) they go up one or more levels.
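The routing decision described above could be sketched roughly like this. This is just an illustration, not Lectora's API; the function name and the score thresholds are hypothetical and would need tuning against real pass/fail data:

```javascript
// Decide the next reading level from a test score (0-100).
// Thresholds (40 / 85) are placeholder assumptions, not fixed rules.
function nextLevel(currentLevel, score, minLevel, maxLevel) {
  if (score < 40) {
    // Struggled: drop a level, but not below the lowest level.
    return Math.max(minLevel, currentLevel - 1);
  }
  if (score > 85) {
    // Aced it: move up a level, but not above the highest level.
    return Math.min(maxLevel, currentLevel + 1);
  }
  // Middling score: stay put (re-test to confirm the level).
  return currentLevel;
}
```

With, say, ten levels, `nextLevel(3, 90, 1, 10)` would send the student up to level 4, while a score of 60 would keep them at level 3 for a confirming re-test.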
I wonder how that would be done, as it would probably involve something like 500 reading texts and many Lectora lessons/tests.
Do you think you would give a score at the end of each test and let that score determine where to take the student next? That part is simple enough. Branching is easy, but I haven't used quite so many questions in one module. If you run into variable storage capacity issues, instead of storing the full answer text for each question, make each answer a single number or letter: hide the built-in text boxes and create your own text boxes for the answers/distractors.
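The storage trick above could look something like the sketch below: one string variable holds a whole test's responses, one character per question, instead of storing every full answer. The variable and function names are made up for illustration; they are not built-in Lectora names:

```javascript
// One compact variable holds all responses, e.g. "acbd" after four questions.
var responses = "";

// Record the chosen distractor as a single letter ("a".."d")
// rather than its full answer text, to keep storage tiny.
function recordAnswer(letter) {
  responses += letter;
}

// Score against an answer key of the same single-letter form,
// returning a percentage that can drive the branching decision.
function scoreTest(key) {
  var correct = 0;
  for (var i = 0; i < key.length; i++) {
    if (responses.charAt(i) === key.charAt(i)) correct++;
  }
  return Math.round((correct / key.length) * 100);
}
```

So four answers recorded as "a", "b", "c", "d" scored against the key `"abca"` would give 75, and that number is all the branching logic needs.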
Is there any chance you could break this up into multiple courses (only if you find you need to) and just link to the next course at the end of each test? Depending on where you are hosting this, the user doesn't have to know there are different tests.
You could certainly do it using the LMS for branching, if your LMS supports that. If the LMS can change the student's course path based on scores, the whole exercise becomes straightforward (though still a lot of work, of course).