• TropicalDingdong@lemmy.world

    > not saying there’s not a climate change disaster happening, but some of these analyses are a little misleading.

    Except that to only say “…since 1979” is to comment in either ignorance or bad faith (your pick). We maintained record-breaking temps, ALL above the prior record, for 36 days; that is the damn point, and to miss that is to miss the entire thing.

    There have been 44 years since 1979. Let’s say the probability of getting a day above the 1979 record in a given year is 1/44 (uniform). The probability of getting even a week of such record-hot days in one year would be (1/44)^7, about a one-in-300-billion chance. There are some issues with the assumptions I’m making for convenience, but it’s not OK to make idle comments with no comprehension of the scale of extremity this event represents.
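
    For anyone who wants to check the arithmetic, here is the whole back-of-the-envelope in a few lines of Python (the 1/44 per-day figure is the uniform assumption above):

    ```python
    # Naive estimate: each day independently has a 1/44 chance of being
    # the record-setter among the 44 years since 1979.
    p_day = 1 / 44

    # A full week of record days, treating the days as independent.
    p_week = p_day ** 7
    print(f"P(week of records) = {p_week:.2e}")     # ~3.1e-12
    print(f"...or about 1 in {1 / p_week:,.0f}")    # ~1 in 320 billion
    ```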

    As in, do you have any fucking idea how unlikely that is? This isn’t an ‘oopsie poopsie’ funny record event.

    • bloodfoot@programming.dev

      Not to be too pedantic, but your back-of-the-envelope probabilities are based on inaccurate assumptions and are probably several orders of magnitude off. Specifically, you’re not just assuming a uniform distribution but also independence from one day to the next. A more accurate treatment would assume conditional dependence from one day to the next (the Markov property). Once you have a record hot day, you are significantly more likely to have another record hot day following it.

      That said, it’s still low probability, just not as low as what you’re saying.

      • TropicalDingdong@lemmy.world

        Any thoughts on how I could incorporate that for a better back-of-the-napkin estimate?

        (Also, consider that the number presented was based on only 7 independent events, not 34.)

        • bloodfoot@programming.dev

          If we stick with your 1/44 assumption, we can then assume a 50% chance that the following day will also be a record-setting day (probably still too low, but the math is easier). Your one-week estimate would be (1/44)*(1/2)^6.
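
          In code, that adjustment is just the following (a quick sketch; the 1/2 persistence is the simplifying assumption above):

          ```python
          p_record = 1 / 44    # same uniform record assumption as before
          p_persist = 1 / 2    # assumed chance a record day is followed by another

          # One record day to start the run, then six "stays a record" steps.
          p_week = p_record * p_persist ** 6
          print(f"about 1 in {1 / p_week:,.0f}")   # 44 * 2**6 = 1 in 2,816
          ```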

            • TropicalDingdong@lemmy.world

              > If we stick with your 1/44 assumption, we can then assume a 50% chance that the following day will also be a record-setting day (probably still too low, but the math is easier). Your one-week estimate would be (1/44)*(1/2)^6.

              While @bloodfoot@programming.dev is right about me not being right, I’m not sure their implementation is correct either. I grabbed the data sheet from here, and have calculated the following probabilities. Now that I’m invested I really want to get this ‘right’.

              This is the workbook I’m in.

              I think it’s most fair to calculate this using the marginal probability of the next day being warmer for a given month. Some months are generally warming and some are generally cooling, so it makes sense to base the marginal probability of the next day being warmer on a specific month (rather than on all months). The probability of a given day in July being warmer than the previous day is 0.615, so pretty close to 50/50, but better to be precise. I think if we’re going to go this way, we can also give ourselves the liberty of not starting on July first; rather, the trend can start any time in July.
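
              (Rough sketch of how a marginal like that can be pulled out of a daily sheet; the numbers below are synthetic stand-ins, so this lands near 0.5 rather than the 0.615 above.)

              ```python
              import numpy as np
              import pandas as pd

              rng = np.random.default_rng(0)

              # Synthetic stand-in for the data sheet: one daily mean temperature
              # per July day for 1979-2022 (44 years x 31 days).
              years = np.repeat(np.arange(1979, 2023), 31)
              days = np.tile(np.arange(1, 32), 44)
              temps = rng.normal(loc=16.5, scale=0.6, size=years.size)
              july = pd.DataFrame({"year": years, "day": days, "temp": temps})

              # Fraction of within-year, day-to-day steps in July that got warmer.
              diffs = july.sort_values(["year", "day"]).groupby("year")["temp"].diff().dropna()
              print(f"P(next July day warmer) ≈ {(diffs > 0).mean():.3f}")
              ```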

              So now the math becomes:
              State one is the probability of having a record-warm day in July: 1/44 × (days in July).
              State two is the probability of the next day being warmer than the previous, within July: 0.615.

              Which, when I put it into this calculator, gives… .9981? The implied rarity seems extremely low, so much so that I think either the Markov chain is implemented wrong or the Markov property doesn’t apply here.

              This would put the probability of a summer like this one at about a one-in-500-year occurrence. My understanding and read of the news is that this is an event on the scale of ‘has never happened before’.
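
              If anyone wants to sanity-check the chain numerically, here is a small Monte Carlo sketch of how I’m reading the setup (1/44 chance an ordinary day starts a record, 0.615 chance a record day is followed by another, and the streak can start on any day of July). No guarantee this matches what the linked calculator is doing.

              ```python
              import numpy as np

              rng = np.random.default_rng(0)

              P_START = 1 / 44    # assumed chance an ordinary July day sets a record
              P_PERSIST = 0.615   # assumed chance a record day is followed by another
              DAYS = 31           # days in July
              STREAK = 7          # streak length in question
              TRIALS = 100_000

              hits = 0
              for _ in range(TRIALS):
                  run = best = 0
                  record_yesterday = False
                  for _ in range(DAYS):
                      p = P_PERSIST if record_yesterday else P_START
                      record_today = rng.random() < p
                      run = run + 1 if record_today else 0
                      best = max(best, run)
                      record_yesterday = record_today
                  if best >= STREAK:
                      hits += 1

              print(f"P(at least a {STREAK}-day record streak in July) ≈ {hits / TRIALS:.4f}")
              ```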

              Update:

              So I am still not confident in the Markov implementation, but I put together another approach: a one-sided t-test of this July versus all the previous Julys.

              Here is a plot of that (month is on the x-axis).

              T-statistic came in too low to calculate a p-value.
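
              For reference, that comparison has roughly this shape in Python (the temperature values below are synthetic stand-ins, not the numbers from the sheet):

              ```python
              import numpy as np
              from scipy import stats

              rng = np.random.default_rng(0)

              # Synthetic stand-ins: pooled daily July temperatures for the prior
              # years, and the daily temperatures for the July in question.
              prior_julys = rng.normal(loc=16.5, scale=0.6, size=43 * 31)
              this_july = rng.normal(loc=17.3, scale=0.4, size=31)

              # One-sided Welch t-test: is this July hotter than the prior Julys?
              t_stat, p_value = stats.ttest_ind(this_july, prior_julys,
                                                equal_var=False,
                                                alternative="greater")
              print(f"t = {t_stat:.2f}, one-sided p = {p_value:.3g}")
              ```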