• bhmnscmm@lemmy.world · 2 months ago

    9 of the teams reaching a different conclusion is a pretty large group. Nearly a third of the teams, using what I assume are legitimate methods, disagree with the findings of the other 20 teams.

    Sure, not all teams disagree, but a lot do. So the issue is whether or not the current research paradigm correctly answers “subjective” questions such as these.

    • Eheran@lemmy.world · 2 months ago

      If we only look at those with p < 0.05 (green) and a 95% confidence interval, then there are 17 teams left. And they all(!) agree, with more than 95% confidence.
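      A minimal sketch of the filtering step being described: given each team's effect estimate and p-value, keep only the significant ones and check whether they agree on the direction of the effect. The team names, estimates, and p-values below are invented for illustration, not the actual study's numbers.

```python
# Hypothetical many-analysts results: (team, effect estimate, p-value).
# These numbers are made up for illustration only.
teams = [
    ("A", 1.4, 0.01), ("B", 1.1, 0.03), ("C", 0.9, 0.04),
    ("D", 0.2, 0.40), ("E", -0.3, 0.60), ("F", 1.8, 0.001),
]

# Keep only teams whose result is significant at the 0.05 level.
significant = [(name, est) for name, est, p in teams if p < 0.05]

# Among those, check whether all agree on the sign of the effect.
all_agree = all(est > 0 for _, est in significant)
print(len(significant), all_agree)  # → 4 True
```

      Note this only shows that the significant subset is internally consistent; it says nothing about whether discarding the non-significant teams was justified.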

      • BearOfaTime@lemm.ee · 2 months ago

        And you missed the point in the very article about how the p-value isn’t really as useful as it’s been touted.

        • Eheran@lemmy.world · 2 months ago

          That’s not the point. My point is that the results are indeed mostly very similar, unlike what OP claims.

          I never said that only looking at p values is a good idea or anything else like that.

      • bhmnscmm@lemmy.world · 2 months ago

        So ignore all non-significant results? What’s to say those methods produce findings closer to the truth than the methods that found no significant result?

        The issue is that so many seemingly legitimate methods produce different findings with the same data.
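        One way the same data can support different conclusions from equally defensible analyses is Simpson's paradox: a pooled comparison and a within-group comparison can point in opposite directions. The tiny dataset below is invented purely to illustrate that; it has no connection to the study being discussed.

```python
def mean(xs):
    return sum(xs) / len(xs)

# Invented illustrative records: (group, treated?, outcome).
data = [
    ("g1", True, 10), ("g1", False, 9), ("g1", False, 9),
    ("g2", True, 2),  ("g2", True, 2),  ("g2", False, 1),
]

# Analysis 1: pooled difference in means (treated minus control).
pooled = (mean([y for _, t, y in data if t])
          - mean([y for _, t, y in data if not t]))

# Analysis 2: average of the within-group differences.
groups = {g for g, _, _ in data}
within = mean([
    mean([y for g2, t, y in data if g2 == g and t])
    - mean([y for g2, t, y in data if g2 == g and not t])
    for g in groups
])

print(round(pooled, 2), round(within, 2))  # → -1.67 1.0 (opposite signs)
```

        Both analyses are "legitimate" on their face, yet one says the treatment hurts and the other says it helps, which is exactly the kind of divergence many-analysts studies surface.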