• Touching_Grass@lemmy.world
    10 months ago

    Ever-So-Ethical

Anyone else pick up on this bullshit strategy? It took off during COVID. They fake what a thing stood for, then tear it down even though it was never actually an attribute of it.

    Something like

“The infallible Dr. Ghatgud just got something wrong.”

I think it’s shitty.

• AutoTL;DR@lemmings.world [bot]
    10 months ago

    This is the best summary I could come up with:


    And let’s zoom in on that number one core value, “AGI Focus,” because it’s a perfect example of how the company’s favorite terms can feel like works in progress.

    In February, OpenAI’s inscrutable CEO Sam Altman wrote in a company blog post that AGI can broadly be defined as “systems that are generally smarter than humans,” but in a wide-ranging New York Magazine interview published last month, he’d downgraded the definition to AI that could serve as the “equivalent of a median human that you could hire as a co-worker.”

Do OpenAI and its CEO think that AGI, their purported new core value, will consist of superhuman artificial intelligence, or is it an AI that’s just about as smart as the average person?

    Founded in 2015 by Altman, Elon Musk, and a handful of others who are by and large no longer affiliated, OpenAI was created as a nonprofit research lab that was meant, essentially, to build good AI to counter the bad.

    Though the firm still pays lip service to that original goal, its drift away from nonprofit AI do-gooders to a for-profit endeavor led to Musk’s exit in 2019, and that purpose-shifting appears to have bled into its self-descriptions as well.

    “We are committed to building safe, beneficial AGI that will have a massive positive impact on humanity’s future,” the OpenAI job postings page now explains.


    The original article contains 440 words, the summary contains 227 words. Saved 48%. I’m a bot and I’m open source!