

Exactly this, and rightly so. The school’s administration has a moral and legal obligation to do what it can for the safety of its students, and allowing this to continue unchecked violates both of those obligations.
This. Satire would be writing the article in the voice of the most vapid executive saying they need to abandon fundamentals and turn exclusively to AI.
However, that would be indistinguishable from our current reality, which would make it poor satire.
What part of “we paid these guys and they said we’re fine” do you not understand? Why would they choose, pay, and publish the results from a company they didn’t trust to clear them?
I’m not saying it’s rotten, but the fact that the third party was unilaterally chosen by and paid for by LMG makes all of the results pretty questionable.
It’s broken now? I’d say that’s a bold assumption that it ever worked in the first place.
Edit: to be clear, I mean that it is and always has been an impossible problem. The only reason it ever “worked” is that some broker company offered it as a feature, not because anything compelled them to give original artists a cut. And that’s before you consider the question, “but how do you know the NFT was made by the original artist?”
Part of this is a debate on what the definition of intelligence and/or consciousness is, which I am not qualified to discuss. (I say “discuss” instead of “answer” because there is no agreed-upon answer to either of those.)
That said, one of the main purposes of AGI would be the ability to learn novel subject matter and to come up with solutions to novel problems. No machine learning tool we have created so far is capable of that, on a fundamental level. They require humans to frame their training data by defining what the success criteria are, or they spit out the statistically likely human-like response based on all of the human-generated content they’ve consumed.
In short, they cannot understand a concept that humans haven’t yet understood, and can only echo solutions that humans have already tried.
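To make the “humans frame the success criteria” point concrete, here is a toy sketch (a hypothetical line-fitting task, not any particular system): even the simplest learning loop only optimizes a criterion a human wrote down in advance, over answers a human supplied.

```python
# Human-framed training data: inputs paired with human-provided "right" answers.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # follows y = 2x + 1

# Human-defined success criterion: mean squared error of a line fit.
def loss(w, b):
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

# Gradient descent on that criterion -- the machine never questions it.
w, b = 0.0, 0.0
lr = 0.05
for _ in range(2000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges near 2.0 and 1.0
```

The model “learns” exactly and only what the human-chosen loss rewards on the human-labeled examples; swap in a different criterion and it converges on something else entirely, which is the sense in which these tools can’t define success for themselves.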