"If an AI can’t reliably help teach reading (not even do it by itself, just help) it’s unclear what Large Language Models are actually good for in education."
Unclear to YOU maybe. What a tremendously uninformed statement. Teaching someone to read is a TERRIBLE task for a large language model, and this should be obvious to anyone who has a basic understanding of what an LLM is.
Here is a much better prompt:
"What are some of the best published resources, plans, books, or curriculum that parents can use to teach their children to read at home?"
Sonnet responds straight away with some of the best reading approaches. #1 on the list generated for me is a book called "Teach Your Child to Read in 100 Easy Lessons," which I've personally used to teach six children to read from scratch. It's fantastic.
"Now, I’ve tried this with different prompts. I’ve asked it to list out all the letter sounds with examples of each sound (which it can do) and then create sentences. Nothing worked for me."
What this author SHOULD have done is ask Claude, "What are the ways words are written such that the sounds that the words make are explicitly denoted, and not dynamically determined based on context?" Then the author would have learned about "phonetic transcription." This would have led down a path towards success, one that probably would have involved writing and running basic scripting code to ensure the words being used are correct. Unfortunately, the author believes they are interacting with "AI," not an LLM. And so they aren't trying to learn anything themselves during the interaction; instead they expect to remain at their current level of knowledge and offload all understanding onto the tool.
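That scripting step could be as simple as the sketch below. (Everything here is hypothetical for illustration: the tiny `PHONEMES` dictionary is hand-made, and a real script would load a full pronunciation dictionary such as CMUdict instead.)

```python
# Hypothetical sketch: check that candidate words for a reading lesson
# use ONLY sounds the child has already been taught. The mini phoneme
# dictionary below is hand-made for illustration; a real script would
# load a full pronunciation dictionary like CMUdict.
PHONEMES = {
    "cat":    ["K", "AE", "T"],
    "sat":    ["S", "AE", "T"],
    "fish":   ["F", "IH", "SH"],
    "splash": ["S", "P", "L", "AE", "SH"],
    "mat":    ["M", "AE", "T"],
}

def usable_words(taught_sounds, candidates):
    """Keep only words whose every phoneme has already been taught."""
    taught = set(taught_sounds)
    return [w for w in candidates
            if w in PHONEMES and set(PHONEMES[w]) <= taught]

taught = ["K", "S", "T", "M", "AE"]
print(usable_words(taught, ["cat", "sat", "fish", "splash", "mat"]))
# → ['cat', 'sat', 'mat']  ("fish" and "splash" use untaught sounds)
```

Twenty lines of code, and now the lesson sentences can be mechanically verified instead of trusting the model's output blindly.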
"Second, clearly the model understood the request in the sense of being able to repeat it back. It also understood what it did wrong. A human in that epistemic state would not make the same mistake twice, using “fish” immediately after “splash” was acknowledged as incorrect."
No, no, and no. The model understands NOTHING. NOTHING AT ALL. It's a &#$*%ing machine that is predicting next-token probabilities and making a selection based on those probabilities.
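The entire "decision" mechanism amounts to a weighted dice roll. Here's a toy sketch of that selection step (the candidate tokens and probabilities are made-up numbers, purely illustrative):

```python
import random

# Toy illustration of next-token selection: given a probability
# distribution over candidate tokens (numbers are made up), the
# generation step is just a weighted random choice. There is no
# "understanding" anywhere in this process.
next_token_probs = {
    "fish": 0.40,   # can stay highly probable even right after being
    "dog":  0.35,   # acknowledged as "incorrect" in the conversation
    "cat":  0.25,
}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())
choice = random.choices(tokens, weights=weights, k=1)[0]
print(choice)  # one of "fish", "dog", "cat", weighted by probability
```

Which is exactly why "acknowledging a mistake" doesn't stop the model from repeating it: the acknowledgment just shifts the probabilities a bit, it doesn't install a rule.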
ChatBot tools ABSOLUTELY CAN be used in an educational setting, even with children. Just as long as you don't mischaracterize the tools as "AI" and anthropomorphize the tech into a semi-sentient being. Be honest with the children. It's easy:
"This is a tool. It may seem like you are talking to another person, but you are not. The tool is just trying to guess what words should come next. It is very good at guessing. It is very fast at guessing. Sometimes what it says to you makes sense, or is correct. Sometimes what it says to you does not make sense, or is wrong. Be careful what you ask it, and always remember that the text might be wrong. Usually you'll need to make sure it is correct by asking someone else who has personal, first-hand experience with the topic."