• Curtis "Ovid" Poe (he/him)@fosstodon.org
    3 months ago

    @froztbyte Yeah, having in-depth discussions is hard on Mastodon. I keep wanting to write a long post about this topic. For me, the big issues are environmental, bias, and ethics.

    Transparency is different. I see it in two categories: how it made its decisions and where it got its data. Both are hard problems and I don’t want to deny them. I just like to push back on the idea that AI is not providing value. 😃

    • Curtis "Ovid" Poe (he/him)@fosstodon.org
      3 months ago

      @froztbyte For environmental costs, MatMulFree LLMs look like they can reduce energy costs by 50x. [1] They’ve recently gotten funding to build a larger model. This would be a huge win.

      For bias, I’m worried about the WEIRD problem of normalizing Western values and pushing towards a monoculture.

      For ethics, it’s an absolute nightmare. If your corpus includes Mein Kampf, for example, how does the LLM know what is a lie and what is not?

      Many hurdles here.

      1. https://arxiv.org/abs/2406.02528
      • Curtis "Ovid" Poe (he/him)@fosstodon.org
        3 months ago

        @froztbyte As for the issue of transparency, it’s ridiculously hard in real life. For example, for my website I used a format I created called “blogdown”, which is Markdown combined with a template language to make it easy to write articles. I never cited my sources, nor do I think I could. After decades of programming, how could I cite everything I’ve ever learned from?

        As for how transparent AI is in arriving at its decisions, that falls into a separate category and requires different thinking.