- It is hard to tell what is right and what is wrong, because pressing Enter turns the card green and replaces the student's response with the input definition. A quick message pops up if something is wrong, but it disappears after 2-3 seconds (see screenshot; the little pink bar is in the top right corner). If a student looks away even for a second, it is unclear whether the answer was right or wrong, because the pink feedback bar is gone. The student's actual response disappears from the screen, so they can't call a teacher over to ask for clarification; a student would have to remember exactly what they typed to know why it was marked wrong. Even I could not always tell whether I was getting it right without watching both the corner and my definition in the center as I pressed Enter. For students who take more than 3 seconds to read 5 words, this will be challenging, especially when they see friendly green words on their screen. From a teacher's perspective, if students are doing this while I am circulating, I don't know how I would tell who is successful and who is struggling until I go back to the report after class. I can see the overridden answers later if I choose to review each student's study set individually, but that is more tedious than grading a Google Form as we do for Friday's quizzes. Additionally, if we are doing this on a Thursday, when review is most crucial, it is too late for me to circle back and tell students why they got something wrong. I would sooner give them a second Google Form or a "trade and grade" the Thursday before a quiz, as we have been doing, because I can control and address both of those quickly in the classroom.
- I do not feel that 6th graders have the judgment to override their own responses, especially without being able to closely read what they just submitted to the AI like they can on a GF, GD, or paper. If a student clicks "explanation" so they can decide whether to override on their own, the explanation is INCREDIBLY wordy for a 6th grader. I think it would be hard for them to discern what is helpful feedback and what is robot regurgitation (see second screenshot). My thinking here is that in recent weeks we had to simplify 6th grade definitions that included words like "society," "regardless," or "devoted." I do not see our students being set up for success with the explanations Omnisets produces, which include words like "oneself," "modest," or "unassuming."
- It wants a very word-for-word answer. Example: for "weary," the input definition was "feeling or showing tiredness." The AI marked the following responses wrong:
  • having or showing tiredness
  • showing or having tiredness
  • showing or feeling tiredness
  • to have or feel tiredness
  • when you feel or show that you are tired
  • to be or show that you're tired

  My human grading eyes would mark all of these responses right. I consider myself someone who knows what it means to be weary, but even after several tries I could not reproduce the precise verbs and order I input into Omnisets and have cold-called students on for a month now. I can only imagine what this will look like for content star cards, where students have to answer in multiple sentences to explain a chart, graph, or history graphic. The AI is stumbling on straightforward definitions of six words or less. This is going to lead to incessant "what if/about" questions, especially if students cannot tell me word for word what they typed 60 seconds before.