• fidodo@lemmy.world · 5 months ago

    These aren’t simulations that estimate results; they’re language models extrapolating from a ton of human knowledge embedded as artifacts in text. They won’t necessarily pick the best long-term solution.

    • intensely_human@lemm.ee · 5 months ago

      Language models can extrapolate, but they can also reason (by extrapolating human reasoning).

      • fidodo@lemmy.world · 5 months ago

        I want to be careful about how the word “reasoning” is used, because when it comes to AI there’s a lot of nuance. LLMs can recall text that contains reasoning, as an artifact of the human knowledge stored in that text. It’s a subtle but important distinction that matters for how we deploy LLMs.