Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 78 Posts
  • 3.49K Comments
Joined 2 years ago
Cake day: October 4th, 2023


  • I once wrote code for an elderly researcher who would only review code as a hard copy. I’d bring him stacks of paper and he’d get going with his pen and highlighter. And I’ll grant that the resolution is normally higher on paper than on most displays. I’m viewing this on a laptop screen that’s about 200 ppi. A laser printer is probably printing at a minimum of 300 dpi, maybe 600 or 1200 dpi.

    I still think that the few people reading things in print are the exception that proves the rule, though.



  • Times New Roman was designed for the print era, and Calibri for onscreen viewing. Onscreen viewing is a lot more common today. Based on that technical characteristic, I’d be kind of inclined to favor Calibri or at least some screen-oriented font.

    That being said, screens are also higher-resolution than they were in the past, so the rationale might be less-significant than it once was.

    https://en.wikipedia.org/wiki/Calibri

    Calibri (/kəˈliːbri/) is a digital sans-serif typeface family in the humanist or modern style. It was designed by Lucas de Groot in 2002–2004 and released to the general public in 2006, with Windows Vista.[3] In Microsoft Office 2007, it replaced Times New Roman as the default font in Word and replaced Arial as the default font in PowerPoint, Excel, and Outlook. In Windows 7, it replaced Arial as the default font in WordPad. De Groot described its subtly rounded design as having “a warm and soft character”.[3] In January 2024, the font was replaced by Microsoft’s new bespoke font, Aptos, as the new default Microsoft Office font, after 17 years.[4][5]

    I suspect that the Office shift is probably a large factor in moving to Calibri.

    That being said, there are many Times New Roman implementations, but it sounds like Calibri is owned by Microsoft, so I’d be kind of inclined to favor something open.



  • I wonder how much exact duplication each process has?

    https://www.kernel.org/doc/html/latest/admin-guide/mm/ksm.html

    Kernel Samepage Merging

    KSM is a memory-saving de-duplication feature, enabled by CONFIG_KSM=y, added to the Linux kernel in 2.6.32. See mm/ksm.c for its implementation, and http://lwn.net/Articles/306704/ and https://lwn.net/Articles/330589/

    KSM was originally developed for use with KVM (where it was known as Kernel Shared Memory), to fit more virtual machines into physical memory, by sharing the data common between them. But it can be useful to any application which generates many instances of the same data.

    The KSM daemon ksmd periodically scans those areas of user memory which have been registered with it, looking for pages of identical content which can be replaced by a single write-protected page (which is automatically copied if a process later wants to update its content). The amount of pages that KSM daemon scans in a single pass and the time between the passes are configured using sysfs interface

    KSM only operates on those areas of address space which an application has advised to be likely candidates for merging, by using the madvise(2) system call:

    int madvise(addr, length, MADV_MERGEABLE)
    

    One imagines that one could maybe make a library interposer to induce use of that.
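    To make that concrete, here is a minimal sketch of what such an interposer might look like on Linux with glibc. Everything beyond the madvise(2) call quoted above is an assumption for illustration: the file name, the choice to advise only anonymous mappings, and the silent-failure handling when KSM isn't compiled in.

```c
/* ksm_interpose.c (hypothetical name): an LD_PRELOAD interposer that
 * wraps mmap() so every anonymous mapping a program creates is advised
 * as MADV_MERGEABLE, making it a candidate for ksmd to scan.
 *
 * Build (sketch):  gcc -shared -fPIC -o ksm_interpose.so ksm_interpose.c -ldl
 */
#define _GNU_SOURCE        /* for RTLD_NEXT and MADV_MERGEABLE */
#include <dlfcn.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/types.h>

void *mmap(void *addr, size_t length, int prot, int flags,
           int fd, off_t offset)
{
    /* Look up the next mmap in the link chain (normally libc's). */
    static void *(*real_mmap)(void *, size_t, int, int, int, off_t);
    if (!real_mmap)
        real_mmap = (void *(*)(void *, size_t, int, int, int, off_t))
                        dlsym(RTLD_NEXT, "mmap");

    void *p = real_mmap(addr, length, prot, flags, fd, offset);

    /* Only anonymous mappings are sensible KSM candidates; file-backed
     * pages are already shared via the page cache. */
    if (p != MAP_FAILED && (flags & MAP_ANONYMOUS))
        madvise(p, length, MADV_MERGEABLE);  /* fails harmlessly (EINVAL)
                                                if CONFIG_KSM is off */
    return p;
}
```

    Loaded via something like `LD_PRELOAD=./ksm_interpose.so some_program`, this would mark the program's anonymous mappings as mergeable without touching its source, though whether ksmd actually merges anything still depends on the sysfs knobs under /sys/kernel/mm/ksm/.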





  • I’ve also noticed that if you want a chest smaller than DDD, it’s almost impossible with some models — unless you specify that they are a gymnast.

    That’s another point of weakness in present generative AI image models — humans have an intuitive understanding of relative terms and can iterate on them.

    So, it’s pretty easy for me to point at an image and ask a human artist to “make the character’s breasts larger” or “make the character’s breasts smaller”. A human artist can look at an image, form a mental model of the image, and produce a new image in their head relative to the existing one by using my relative terms “larger” and “smaller”. They can then go create that new image. Humans, with their sophisticated mental model of the world, are good at that.

    But we haven’t trained an understanding of relative relationships into diffusion models today, and doing so would probably require a more sophisticated — maybe vastly more sophisticated — type of AI. “Larger” and “smaller” aren’t really usable as things stand today. Because breast size is something that people often want to muck with, people have trained models on a static list of danbooru tags for breast sizes, and models trained on those can use them as inputs, but even then, it’s a relatively-limited capability. And for most other properties of a character or thing, even that’s not available.

    For models which support it, prompt term weighting can sometimes provide a very limited analog to this. Instead of saying “make the image less scary”, maybe I “decrease the weight of the token ‘scary’ by 0.1”. But that doesn’t work with all relationships, and the outcome isn’t always fantastic even then.
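    For anyone who hasn't run into it, that kind of reweighting looks something like the following in AUTOMATIC1111-style prompt syntax (an assumption about the front-end; ComfyUI and others use similar but not identical forms), where 1.0 is the implicit default weight:

```text
a portrait of a clown, (scary:1.0)    the baseline prompt
a portrait of a clown, (scary:0.9)    the "decrease the weight by 0.1" case
```

    The model still only ever sees absolute weights; the "relative" part lives entirely in the human doing the iterating.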


  • There are also things that present-day generative AI is not very good at in existing fields, and I’m not sure how easy it will be to address some of those. So, take the furry artist. It looks like she made a single digitally-painted portrait of a tiger in a suit, a character that she invented. That’s something that probably isn’t all that hard to do with present-day generative AI. But try using existing generative AI to create several different views of the same invented character, presented consistently, and that’s a weak point. That may require very deep and difficult changes on the technology front to try to address.

    I don’t feel that a lot of this has been hashed out, partly because a lot of people, even in the fields, don’t have a great handle on what the weaknesses are and what might be viably remedied, and how, on the AI front. It would be interesting to run some competitions in various areas to see what a competent person in the field could do versus someone competent in using generative AI. It’ll probably change over time, and techniques will evolve.

    There are areas where generative AI for images has both surpassed what I expected and underperformed. I was pretty impressed with its ability to capture the elements of what creates a “mood”, say, and make an image sad or cheerful. I was very surprised at how effective current image generation models were, given their limited understanding of the world, at creating things “made out of ice”. But I was surprised at how hard it was to get any generative AI model I’ve tried to generate drawings containing crosshatching, which is something that plenty of human artists do just fine. Is it easy to address that? Maybe. I think I could give some pretty reasonable explanations as to why consistent characters are hard, but I don’t really feel like I could offer a convincing argument about why crosshatching is, don’t really understand why models do poorly with it, and thus, I’ve no idea how hard it might be to remedy that.

    Some fantastic images are really easy to create with generative image AI. Some are surprisingly difficult. To name two things that I recall !imageai@sh.itjust.works regulars have run into over the past couple years: trying to create colored tire treads (it looks like black treads are closely associated with the “tire” token) and trying to create centaurs (generative AI models want to do horses or people, not hybrids). The weaknesses may be easy to remedy or hard, but they won’t be the same weaknesses that humans have; these are things that are easy for a human. Ditto for strengths — it’s relatively-easy for generative AI to create extremely-detailed images (“maximalist” was a popular token that I recall seeing in many early prompts) or to replicate images of natural media that are very difficult or time-consuming to work in physically, and those are areas that aren’t easy for human artists.



  • Hey, at least he got what Trump didn’t. He had sex with his daughters.

    I think that you might be thinking of Lot rather than Noah.

    Genesis 19:30–38:

    Lot and his two daughters left Zoar and settled in the mountains, for he was afraid to stay in Zoar. He and his two daughters lived in a cave. One day the older daughter said to the younger, “Our father is old, and there is no man around here to give us children—as is the custom all over the earth. Let’s get our father to drink wine and then sleep with him and preserve our family line through our father.”

    That night they got their father to drink wine, and the older daughter went in and slept with him. He was not aware of it when she lay down or when she got up.

    The next day the older daughter said to the younger, “Last night I slept with my father. Let’s get him to drink wine again tonight, and you go in and sleep with him so we can preserve our family line through our father.” So they got their father to drink wine that night also, and the younger daughter went in and slept with him. Again he was not aware of it when she lay down or when she got up.

    So both of Lot’s daughters became pregnant by their father. The older daughter had a son, and she named him Moab[a]; he is the father of the Moabites of today. The younger daughter also had a son, and she named him Ben-Ammi[b]; he is the father of the Ammonites[c] of today.

    It looks like Noah had three sons, but if he had daughters, the Bible doesn’t say anything about them, much less any incest with them.


  • https://en.wikipedia.org/wiki/Genesis_flood_narrative

    Scholars believe that the flood myth originated in Mesopotamia during the Old Babylonian Period (c. 1880–1595 BCE) and reached Syro-Palestine in the latter half of the 2nd millennium BCE.[20] Extant texts show three distinct versions, the Sumerian Epic of Ziusudra, (the oldest, found in very fragmentary form on a single tablet dating from about 1600 BCE, although the story itself is older), and as episodes in two Akkadian language epics, the Atrahasis and the Epic of Gilgamesh.[21] The name of the hero, according to the version concerned, was Ziusudra, Atrahasis, or Utnapishtim, all of which are variations of each other, and it is just possible that an abbreviation of Utnapishtim/Utna’ishtim as “na’ish” was pronounced “Noah” in Palestine.[22]

    Numerous and often detailed parallels make clear that the Genesis flood narrative is dependent on the Mesopotamian epics, and particularly on Gilgamesh, which is thought to date from c. 1300–1000 BCE.[23]

    https://en.wikipedia.org/wiki/Croeseid

    The Croeseid, anciently Kroiseioi stateres, was a type of coin, either in gold or silver, which was minted in Sardis by the king of Lydia Croesus (561–546 BC) from around 550 BC. Croesus is credited with issuing the first true gold coins with a standardised purity for general circulation,[1] and the world’s first bimetallic monetary system.[1]

    I don’t think that they would have had gold coins then.