Seriously, 15 times is my limit on correcting an LLM.
The name in question? Rach. Google absolutely cannot pronounce it in any other way than assuming I was referring to Louise Fletcher in the diminutive.
Specifying “long a” did nothing, and now I’m past livid. If you can’t handle a common English name, why would I trust you with anything else?
This is my breaking point with LLMs. They’re fucking idiotic and can’t learn how to pronounce English words auf Englisch.
I hope the VCs also die in a fire.
The comments of this YouTube video are enough to tell me that your expectations are not realistic and that neither human nor machine is going to pronounce this correctly the first time except by coincidence.
Short for Rachel?
Ray-tch. Rachel, but without the “el.”
Raych?
I swear, if I have to start misspelling things for computers to pronounce it correctly …
English is notoriously awful regarding orthography vs pronunciation. I actually thought you meant something that rhymed with Bach just looking at the name with a longer ‘a’ for some reason (which is weird since vowel length isn’t phonemic in English).
Edit: you probably also could have said “hard a” or something since it probably literally thinks ‘long a’ means ‘hold the a sound for a longer duration’ (which makes sense to me)
The Great Vowel Shift in Middle English should be well understood by a computer system given that it happened centuries ago.
“Long a” (in most European languages, this is “e,” and I can’t be fucked to look up the IPA [OK, after realising I should include it, we’re talking about [eɪ]]) and “short a” (closest I can come up with is “ä” in German, though the throat positions are different [æ]) were literally the educational terms taught to me in first grade.
If the training corpus is so poor that what a 6-year-old understood in the '80s is utterly baffling, NLP hasn’t advanced nearly as much as it should have.
There is a reason why people keep asking “How do you spell it?” when told a name in English. The counterpart is “How do you pronounce it?”
Even with “long a”, I still can’t tell how you would want to pronounce “Rach”. I can come up with four different pronunciations right now: “Ra-ah-ch”, “Ra-ah”, “Ra-sh”, “Ra-kh”.
Given that OP says this is a common English name (it’s not), I have to imagine that they’re referring to the common short form of Rachel. Pronounced as just the first syllable.
It’s literally the English version of an Old Testament name. It’s not Aiden or whatever the new hotness is, but it’s not uncommon.
Rachel is a very common given name. “Rach” is a fairly common nickname for it. “Rach” is not a common given name. (This matches what I said above.)
I just took a look at some baby name sites to try to find some statistics. I actually can’t find a single person named “Rach,” because all the sites assume I want statistics for the long form, even when I’m on the page for “Rach” and they also have a page for “Rachel.” I’m taking this to mean that having the short form as your given name is extremely rare.
I’m not claiming her given name was Rach. In fact, calling her that was rather disastrous (family only), but it was all my brain could come up with after “hon” was actually barely avoided. For my boss. In the middle of the newsroom.
I believe you didn’t intend to, but you did claim it, twice. Hence the confusion of the commenter I initially replied to (where I guessed you meant the common _nick_name).
Then you replied to me saying “it’s literally from the bible [so it’s a common name]” implying that you disagreed with me about it being a nickname and you did really mean it as a given name.
Hopefully that explains the confusion.
To me, this simply is further evidence that LLMs aren’t ready for primetime, as though this were not already established.
What does any of this have to do with LLMs?
I mean I agree with the conclusion but the confused people here are… people. I think if you ask an LLM about the “common name Rach,” it’ll also tell you that you probably mean Rachel.
I feel like, at least with American English, people have taken a lot of liberties in how they spell a name and then still expect it to be pronounced a particular way.
And I first read it as Ra-sh, but also could see it as Ray-sh.
What did you do to “teach it”?
Short for Rachel?
Funny, the UK pronunciation for Rach is the way I would pronounce it, even though I’m Canadian, and I’ve never heard anyone say the Canadian pronunciation it gives:
https://www.babynamespedia.com/pronounce/Rach
[ˈreɪ.tʃ’] RAY-ch
How do they differ? It’s literally just saying Rachel without the ‘el’.
You can listen on that site.
One is RAY-ch; the other is RAH-ch.
Maybe French Canadian? I’ve never heard that in English in my life
Did you try spelling it with phonetics?
I know IPA (the linguistic term, not the beer … OK, I also know the beer, but that’s not important right now) … and, yeah, I tried that, but on a laptop without a numpad, it’s a bit of a slog.
What was maddening was the LLM got it right somewhere around 10% of the time after I corrected it. This was a voice conversation, so every time I corrected it, that should have been clear data. Aren’t these systems simply supposed to be pattern recognition? How is it outputting wildly different pronunciations (N>5) with constant inputs?
I’m pretty sure whatever voice system you’re using is just transcribing things to text and feeding it into an LLM, so it wouldn’t actually have that audio data. I’m not aware of any audio equivalent of LLMs existing.
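A minimal sketch of the pipeline described above (stub functions, purely hypothetical, not any real API): the audio is flattened to plain text before the LLM ever sees it, so pronunciation information in the user’s voice is discarded at step one.

```python
def speech_to_text(audio):
    # A real STT model would decode the audio; here we just pretend that
    # differently pronounced recordings collapse to the same spelling.
    return "My name is Rach."

def llm(text):
    # The language model only ever sees spelling, never sound.
    return "Nice to meet you, Rach!"

def text_to_speech(reply):
    # TTS must now guess the pronunciation from the letters alone.
    return f"<audio for: {reply!r}>"

def handle_voice_turn(audio):
    return text_to_speech(llm(speech_to_text(audio)))
```

Under this architecture, a spoken correction (“no, RAY-ch, not RAH-ch”) arrives at the model as text that may transcribe identically either way, so there’s no pronunciation signal for it to learn from.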
The equivalent is NLP (natural language processing), which was already a huge research area in the '90s. In fact, had I not been a fucking idiot and caught the journalism bug, with my studies in CS and linguistics, I’d likely be doing quite well.
This said, that was about voice input being converted to text – e.g., Dragon Naturally Speaking – but apparently little progress has been made going in the other direction. NotebookLM had other weird glitches where standard English words get weird vowels some 5% of the time.
The models themselves are nondeterministic: they sample each output token from a probability distribution rather than always picking the single most likely one. They also tend to include a hidden (or sometimes visible) random seed that gets fed into the model as well.
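As a toy illustration (made-up probabilities, not taken from any real model), here’s how sampling plus a seed yields varying outputs for identical input, while a fixed seed makes the output reproducible:

```python
import random

def sample_token(probs, rng):
    """Sample one token from a {token: probability} distribution."""
    r = rng.random()
    cum = 0.0
    for tok, p in probs.items():
        cum += p
        if r < cum:
            return tok
    return tok  # fallback for floating-point rounding

# Toy distribution over pronunciations for the same input text.
probs = {"RAY-ch": 0.5, "RAH-ch": 0.3, "RATCH-ed": 0.2}

# Same input, different seeds: the chosen output can differ run to run.
print(sample_token(probs, random.Random(1)))
print(sample_token(probs, random.Random(2)))

# Same input, same seed: identical output every time.
print(sample_token(probs, random.Random(42)) == sample_token(probs, random.Random(42)))
```

That’s why constant inputs can still produce N>5 different pronunciations: unless the seed is pinned, each turn is an independent draw from the distribution.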
How delightful. I mean, I knew there were reasons you don’t get the same results twice, but I’ve not dived into how all this works, as it seems to be complete bullshit. But it’s nice to hear that’s a feature.
I quickly gave up on correcting those bots. Either you’re lucky and wrote a prompt that induced it to generate a decent answer, or you’re not, and there’s no point in correcting it. In that case, you’re better off doing whatever you were going to do without an LLM.