@froztbyte For environmental costs, MatMulFree LLMs look like they can cut energy costs by roughly 50x [1] (rough sketch of the core idea below). They’ve recently gotten funding to build a larger model. This will be a huge win.
For bias, I’m worried about the WEIRD problem (Western, Educated, Industrialized, Rich, Democratic): normalizing Western values and pushing towards a monoculture.
For ethics, it’s an absolute nightmare. If your corpus includes Mein Kampf, for example, how does the LLM know what is a lie and what is not?
Many hurdles here.
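To make the MatMulFree point concrete, here’s my rough understanding of where the savings come from: the weights are constrained to -1, 0, or +1, so the big matrix multiplies collapse into additions and subtractions. This is only a toy NumPy sketch of that one idea, not the actual architecture from the paper; the shapes and the ternary_matmul helper are mine for illustration.

```python
import numpy as np

def ternary_matmul(x, w_ternary):
    """x: (batch, d_in) activations; w_ternary: (d_in, d_out) with entries in {-1, 0, +1}.

    Gives the same result as x @ w_ternary, but uses only additions and subtractions.
    """
    out = np.zeros((x.shape[0], w_ternary.shape[1]), dtype=x.dtype)
    for j in range(w_ternary.shape[1]):
        plus = x[:, w_ternary[:, j] == 1].sum(axis=1)    # add activations where the weight is +1
        minus = x[:, w_ternary[:, j] == -1].sum(axis=1)  # subtract where the weight is -1
        out[:, j] = plus - minus                         # zero weights contribute nothing
    return out

# Sanity check against an ordinary matmul
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8)).astype(np.float32)
w = rng.integers(-1, 2, size=(8, 4)).astype(np.float32)  # random ternary weights
assert np.allclose(ternary_matmul(x, w), x @ w, atol=1e-5)
```

Fewer and cheaper arithmetic operations per token, plus hardware that can exploit them, is where the claimed energy reduction is supposed to come from.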
@froztbyte As for the issue of transparency, it’s ridiculously hard in real life. For example, my website uses a format I created called “blogdown”: Markdown combined with a template language that makes it easy to write articles. I never cited my sources, nor do I think I could. After decades of programming, how could I cite everything I’ve ever learned from?
As for how an AI is transparent about arriving at its decisions, that falls into a separate category and requires different thinking.