• hark@lemmy.world · 28 days ago

    Reduces human effort in what? Certainly for producing garbage, but it increases my human effort in having to wade through that garbage.

    • Lumidaub@feddit.org · 28 days ago

      The soul-crushing effort of socialising and producing art, an effort that is eating all that mental and physical energy which would be better utilised in the mines to make more profits for billionaires. /s

      • AngryMob@lemmy.one · 27 days ago

        What about people who have artistic thoughts but have trouble getting them out of their head? I would argue that's most people, because most of us aren't artists. We also aren't going to pay a commission for every idea we have. A simple image generator can be valid for that.

        Also, you're ignoring those who may even refine their prompt-generated images (which are usually what people see as AI slop) into something better using all the new tools and techniques available now (inpainting, ControlNets, regional guidance, etc.). I don't think that's any less of an artistic process or artistic outlet than doing it with Photoshop or with physical media.

        • Lumidaub@feddit.org · 27 days ago

          I am VERY convinced you have heard all the counterarguments to these, several times, and you do not need me to reiterate them.

    • Donkter@lemmy.world · 27 days ago

      It reduces effort in summarizing reports or paper abstracts that you aren't sure you need to read. It reduces effort in outlining formulaic types of writing such as cover letters, work emails, etc.

      It reduces effort when brainstorming mundane solutions to things, often by knocking off the most obvious choices, but that's an important step in brainstorming, if you've ever done it.

      Hell, I’ve never had ChatGPT give me the wrong instructions when I ask it for a basic cooking recipe, and it also cuts out all of the preamble.

      If you haven’t found uses for them, either you aren’t trying very hard or you’re simply not in an industry/job that can use them for what they’re useful for. Both of which are OK, but it’s silly to think that your not finding a use for them means no one can use them for anything useful.

      • hark@lemmy.world · 26 days ago

        Creating a lot of filler “content” is another use for them, which is what I was getting at. While I have seen some uses for AI, it overwhelmingly seems to be used to create more work rather than reduce it. Endless spam was bad enough, but now that there’s an easy way to generate mass amounts of convincingly unique text, there’s a lot more to wade through. Google search, for example, used to be a lot more useful, and results that were wastes of time were easier to spot. The fact that summaries can include inaccuracies or outright “hallucinations” makes them mostly worthless to me, since I’d have to at the very least skim the original material to verify anyway.

        I’ve seen AI in action in my industry (software development). I’ve seen it do the equivalent of slapping together code pieced together from Stack Overflow. It’s impressive that it can do that, but what’s less impressive are clueless developers trusting the code as-is with minimal verification/tweaks (just because it runs doesn’t mean it’s correct or anywhere close to optimal), or the even more clueless executives who think this means they can replace developers with AI, or that tasks are a simple matter of “ask the AI to do it”.

      • Squirrelanna@lemmynsfw.com · 27 days ago

        Just because you haven’t personally gotten an egregiously wrong answer doesn’t mean it won’t give one, which means you have to check anyway. Google’s AI famously recommended adding glue to your pizza to make the cheese more stringy. Just a couple of weeks ago I got blatantly wrong information about quitting SSRIs, with its source links directly contradicting its confidently stated conclusion. I had to spend EXTRA time researching just to make sure I wasn’t being gaslit.

        • Donkter@lemmy.world · 27 days ago

          Google’s AI is famously shitty. ChatGPT, especially the most recent version, is very good.

          Also don’t use LLMs for sensitive stuff like quitting SSRIs yet.

          • Squirrelanna@lemmynsfw.com · 24 days ago

            That’s the thing. I didn’t want to use it. The AI’s input was entirely unsolicited, and luckily I knew better than to trust it. I doubt the average user is going to care enough to get a second opinion.

      • AngryMob@lemmy.one · 27 days ago

        To add on to your comment: even beyond job/industry, it’s like your cooking example. I spin up an LLM locally at home for random tasks. An LLM can be your personal fitness coach, help you with budgeting, improve your emails, summarize news articles, help with creative writing, generate Christmas shopping list ideas, brainstorm plants for your new garden, etc. They can fit into so many simple roles that you sporadically need.

        It’s just so easy to fall into the trap of hating them because of the bullshit surrounding them.

        • Hexarei@programming.dev · 27 days ago

          Yeah, as long as you double-check their work and don’t assume their facts are accurate, they’re pretty useful in a lot of ways.

    • Hexarei@programming.dev · 27 days ago

      I’ve found it to be pretty good at transforming and/or extracting data from human input. For example, I’ve got an app that handles incoming jobs, and among the sources of those jobs is “customer sent an email”. Pretty neat to give an LLM a JSON schema and tell it to fill in the details it can figure out from the email. Of course, we disclose to the user that the details were filled in by AI and should be double-checked for accuracy, but it saves our customers a lot of time having the details sussed out from emails that don’t follow a specific format.
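
      A minimal sketch of that kind of pipeline, assuming hypothetical field names and swapping the real model call for a stand-in (the actual app's schema and LLM API aren't known from the comment):

      ```python
      import json

      # Hypothetical job-details schema; the real fields are an assumption.
      JOB_SCHEMA = {
          "type": "object",
          "properties": {
              "customer_name": {"type": "string"},
              "address": {"type": "string"},
              "requested_date": {"type": "string"},
          },
          "required": ["customer_name"],
      }

      def build_prompt(email_body: str) -> str:
          """Ask the model to fill only the fields it can infer, as JSON."""
          return (
              "Extract job details from the email below.\n"
              f"Respond with JSON matching this schema: {json.dumps(JOB_SCHEMA)}\n"
              "Use null for anything you cannot determine.\n\n"
              f"Email:\n{email_body}"
          )

      def parse_reply(reply: str) -> dict:
          """Parse the model's JSON reply, dropping any fields not in the schema."""
          data = json.loads(reply)
          allowed = JOB_SCHEMA["properties"].keys()
          return {k: v for k, v in data.items() if k in allowed}

      # Stand-in for a real LLM call, so the sketch is self-contained.
      def fake_llm(prompt: str) -> str:
          return (
              '{"customer_name": "Jane Doe", "address": null,'
              ' "requested_date": "2024-06-01", "spam": 1}'
          )

      details = parse_reply(fake_llm(build_prompt("Hi, this is Jane Doe ...")))
      print(details)  # pre-filled fields; the user still verifies them
      ```

      The filtering step matters in practice: models sometimes return extra keys, so validating the reply against the schema (rather than trusting it wholesale) keeps the pre-filled form predictable.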