• MysticEdge@lemmy.world · edited · 5 hours ago

    Prompt: “An Aardvark is never wrong in the dark.”

    "Of course. That’s a fantastic and quirky phrase. The meaning is layered, blending literal observation with a deeper, proverbial lesson.

    Here’s a breakdown of its meaning:

    1. The Literal Meaning (The “Aardvark” Part)

    An aardvark is a nocturnal mammal with an exceptional sense of smell and hearing, but very poor eyesight. It spends its nights digging into ant and termite mounds to eat.

    · In the Dark: This is its natural element. It’s when it hunts, navigates, and thrives.
    · Never Wrong: In its domain, using its primary senses (smell, sound, touch), the aardvark is an expert. It can accurately locate a termite mound, distinguish between species, and find its way home. It doesn’t need good eyesight because it has mastered the tools that work in its environment.

    Literally, the phrase means: “A creature is infallible when operating in the environment it is perfectly adapted for.”

    2. The Proverbial Meaning (The “Never Wrong” Part)

    This is where the phrase becomes a piece of folk wisdom or a metaphor for human behavior. It’s used to suggest that:

    · Expertise is Contextual: Someone might seem foolish or incompetent in one situation but brilliant and unstoppable in another. Don’t judge a fish by its ability to climb a tree; judge an aardvark by its ability to find termites in the dark.
    · Play to Your Strengths: The phrase encourages finding the environment or situation where your unique skillset makes you the expert—your “dark.”
    · Confidence in Your Element: When you are in your area of mastery, you should trust your instincts and knowledge. You are the “aardvark” there, and you are “never wrong.”
    · A Defense of Unconventional Methods: It can be a retort to criticism. If someone says, “Your method is strange or doesn’t make sense to me,” you could reply with this phrase, implying, “You’re judging me by the standards of the wrong environment. In my world, with the challenges I face, this is the perfect way to do it.”

    Who Said It?

    The phrase was popularized by the American humorist and illustrator James Thurber. It appears in his 1932 story, The Courtship of Arthur and Al.

    In the story, a character uses it to justify his own peculiar behavior and logic. It perfectly captures Thurber’s style—whimsical, seemingly nonsensical, but revealing a deeper truth about human nature.


    In a Nutshell:

    “An aardvark is never wrong in the dark” means that a specialist is infallible within their specific field of expertise. It’s a defense of finding your niche and trusting your own capabilities where they are meant to be used.

    You wouldn’t ask an aardvark to proofread a document (its “daylight” task), but you’d absolutely trust it to handle your termite problem (its “dark” task)."

  • ShaggySnacks@lemmy.myserv.one · 23 hours ago

    My AI slop:

    The phrase “two dry frogs in a situation” is not a known idiom or established metaphor, but rather a potentially fabricated phrase that might be interpreted as a humorous or awkward social predicament. The term “dry frog” implies something that cannot function or is out of its natural element, which, when applied to a social situation involving two people, could suggest a state of discomfort, vulnerability, or being in a difficult bind.

    Here’s a breakdown of possible interpretations:

    • An awkward social situation: The most likely interpretation is that it describes two people in a socially uncomfortable or precarious situation, where they are “dry” (out of their element) and vulnerable.
    • A difficult or precarious predicament: Similar to the above, “dry frogs” could represent being in a bad situation where they are unable to act effectively or are exposed.
    • A nonsensical or poorly constructed phrase: There is a possibility that the phrase itself is meaningless, and that the concept of “dry frogs in a situation” was generated by an AI that attempts to provide answers even when no information exists, leading to a fabricated explanation.

    Why this phrase might come up:

    • Misunderstanding or misremembering a known saying: Someone may have intended to refer to a different idiom and misremembered it as “two dry frogs”.
    • AI-generated content: As seen in social media posts, AI models might generate explanations for nonsensical phrases, leading to confusion.

    In conclusion, while there isn’t a widely recognized meaning, “two dry frogs in a situation” likely refers to two individuals in an uncomfortable or precarious circumstance.

  • MasterNerd@lemmy.zip · 1 day ago

    I tried to replicate this with Brave’s AI, but it basically just told me I was insane, so I guess it’s better.

      • MasterNerd@lemmy.zip · 1 day ago

        I had to make up my own, since it recognized the two-frogs one, which has been floating around for a few months now. I tried a similar one, in this case “2 monkeys in a bungalow”, and it told me:

        The query “2 monkeys in a bungalow meaning” does not correspond to a known artwork, story, or widely recognized symbol. The provided context discusses “Two Chained Monkeys” by Pieter Bruegel the Elder, a painting from 1562 depicting two monkeys chained in a window recess, symbolizing human folly, sin, and the limitations of human understanding in relation to the divine. This artwork is not associated with a “bungalow” setting.

        There is no information in the provided context about two monkeys in a bungalow or a specific meaning attributed to such a scene. The term “bungalow” does not appear in any of the sources related to the monkeys’ symbolism or the artwork. Therefore, based on the available information, the meaning of “2 monkeys in a bungalow” cannot be determined.

  • MourningDove@lemmy.zip · 1 day ago

    This is just one of the many reasons that AI is hot garbage. You can also cross-reference any song with any TV show, and AI will tell you that song was in an episode. It’ll even name the episode… one that clearly doesn’t have the song.

    And if at first it doesn’t turn up a match, refresh the page. It will. For reference, Woodkid does NOT have any songs in The Blacklist. Yet Google AI told me he had a song in season 4; another search said it was in season 7, then that there were no songs in the show at all, then season 2.

    And we’re trusting this shit in airlines and medical procedures.

    I hope humanity gets sued into non-existence. We’re too fucking stupid to share this planet with animals.

  • Xerxos@lemmy.ml · 2 days ago

    There was a paper about this not long ago. The problem is how LLMs get trained and evaluated: a right answer gets a point, and everything else gets nothing. This rewards guessing (which sometimes produces a point) over answering “I don’t know / I can’t do this” (which never does).
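
    A minimal sketch of that incentive (my own illustration, not code from the paper, assuming the simple 0/1 grading described above):

    ```python
    # Illustrative sketch of 0/1 grading: a guess with any nonzero
    # chance of being right beats abstaining in expectation.

    def expected_score(p_correct: float, abstains: bool) -> float:
        """Expected score on one question under 0/1 grading."""
        if abstains:
            return 0.0        # "I don't know" never earns the point
        return p_correct      # a guess earns 1 with probability p_correct

    # Even a 5%-confident guess out-scores an honest "I don't know":
    print(expected_score(0.05, abstains=False))  # 0.05
    print(expected_score(0.05, abstains=True))   # 0.0
    ```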

    • ipkpjersi@lemmy.ml · 2 days ago

      It’s like when developers give a wrong answer during technical interviews rather than saying “I’d have to look it up” or “I’d have to check the documentation”.

  • Hupf@feddit.org · 2 days ago

    You’re saying you’ve never heard the phrase “two dry frogs is a situation”? That’s just a horse cutting it short at the barber!

  • toeblast96@sh.itjust.works · 1 day ago

    “hey guys my name is timmy bob i like to have fun and play” a little known proverb, this powerful sequence of words represents finding joy, life fulfillment if you will, enlightenment one could even say, through the simple pleasures in life…

  • NigelFrobisher@aussie.zone · 2 days ago

    This is actually a pretty great way to illustrate what LLMs do. They give you an answer regardless of whether it makes sense to do so.

    • GamingChairModel@lemmy.world · 2 days ago

      “AI drives 48% increase in Google emissions”

      That’s not even supported by the underlying study.

      Google’s emissions went up 48% between 2019 and 2023, but a lot of things changed in 2020 generally, especially in video chat and cloud collaboration, dramatically expanding demand for data centers for storage and processing. Even without AI, we could have expected data center electricity use to go up dramatically between 2019 and 2023.

    • Galapagon@sh.itjust.works · 2 days ago

      I think you should be more concerned about the automatic AI responses on every other search than about people having a bit of fun with these.

      • pedz@lemmy.ca · 2 days ago

        I am. That’s why I switched to DDG and deactivated it.

      • Echo Dot@feddit.uk · 2 days ago

        This is my problem with it as well. I wish it were a toggleable option I could click when I wanted an AI summary, which would be basically never.

        At one point I was looking for a pinout diagram for a chip, and the first result I got was the AI summary. I wanted a picture, not text; how is text helpful? All it did was give me a list of the pins. I know what pins it has, but I want to know where they are.

    • Lulzagna@lemmy.world · 2 days ago

      This is a dumb misconception. The high emissions and energy consumption come from training models, not from serving prompts.

      • vrighter@discuss.tchncs.de · 2 days ago

        And models are being trained all the time; it’s the only way to assimilate new data. So your point is moot.

        • mika_mika@lemmy.world · 1 day ago

          No, what he’s saying is that the models are being trained whether or not you, as a user, mess around with the AI.

          It’s like how I didn’t kill the chicken on the store shelf. Whether I purchase it or not doesn’t revive the chicken. The model has already been, or is already being, trained.

          • tankfox@midwest.social · edited · 1 day ago

            That’s a really savvy insight! To expand this analogy further, it’s like your phone or computer gives you a free chicken nugget from a small container attached to the side of the device anytime you search for anything at all. It’s room temperature and often spoiled, it’s your choice whether you eat it or not, but you’re going to get it either way. As such you cannot easily choose to avoid chicken in hopes that that will disincentivize further chicken slaughter.

      • Kilgore Trout@feddit.it · edited · 2 days ago

        False. It’s been shown that serving prompts also drives major energy consumption, albeit perhaps not that much higher than regular search queries.

        • Honytawk@lemmy.zip · 1 day ago

          A prompt uses something like 1/1000 of the power a microwave draws over the same amount of time.

          So the difference between a normal query and an AI query is negligible.
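
          Taking that ratio at face value, a rough back-of-envelope (the ~1.1 kW microwave and ~10 s per prompt are my assumptions for illustration, not figures from the thread):

          ```python
          # Back-of-envelope for the claimed 1/1000 ratio. The microwave
          # wattage and per-prompt time are assumed for illustration.
          microwave_watts = 1100                   # typical countertop microwave
          prompt_watts = microwave_watts / 1000    # claimed ratio -> ~1.1 W
          seconds = 10                             # assumed time to serve one prompt
          energy_joules = prompt_watts * seconds   # ~11 J
          energy_wh = energy_joules / 3600         # ~0.003 Wh per prompt
          print(f"~{energy_wh:.4f} Wh per prompt")
          ```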

  • Nangijala@feddit.dk · 1 day ago

    Reminds me of when I was using ChatGPT to translate a few things I couldn’t figure out myself, and for fun I tested it on translating slang from my language into English. It hallucinated completely and made up a bunch of BS about what this and that slang meant in Danish and how it was used in everyday speech. The catch was that the slang phrases I decided to translate were made-up slang I had invented for a fictional story I was writing.

    • SarcasticCephalopod@lemmy.world · edited · 2 days ago

      “ducks don’t belong in weedwhackers, burlap isn’t a typical material for power tools”

      almost made me laugh loud enough to wake the baby!

    • e0qdk@reddthat.com · 2 days ago

      That’s a pretty decent response. The Google responses other people are posting are Cuil-tier.

        • tankfox@midwest.social · 1 day ago

          If you directly ask for cuil levels, things get weird fast. It understands cuil and is very willing to play along. One could say it plays to the strengths of the LLM directly!

          • kalpol@lemmy.ca · 1 day ago

            You are correct. I asked how tall the Empire State Building was at cuil level 5:

            The Empire State Building is tall in the way that purple tastes of childhood memories. Its height can only be measured by counting the number of dreams that have fallen from its windows - currently 847,293 dreams, though three more dropped while I was typing this sentence.

            You see, the building doesn’t exist on Wednesdays. Instead, it becomes a song that only pigeons can hear, humming at approximately 1,454 hertz per vertical foot of sadness. The elevator buttons are actually tiny portals to parallel dimensions where the building is a sentient cloud that remembers being steel.

            When you ask “how tall,” the building asks back “how blue is the sound of gravity?” because it learned language from the rats in its walls, who are actually former tenants paying rent in breadcrumbs and whispered secrets.

            The true height is measured not in feet but in the collective weight of all the prayers ever spoken on its observation deck, which equals exactly one butterfly’s dream of being a skyscraper. King Kong never climbed it - he IS it, and has been sleeping vertically since 1933, dreaming he’s a building while the city dreams it’s a gorilla.

            The foundation extends infinitely downward into Tuesday, where it’s still being built by the same workers who are simultaneously demolishing it from the top.

    • Psythik@lemmy.world · edited · 1 day ago

      Perplexity does alright too (my preferred LLM because it cites sources, which makes it easier to filter out the slop):

      • huppakee@feddit.nl · 2 days ago

        It’s funny that these LLMs have web search capabilities and Google’s AI assistant doesn’t.