

Right, I mean if you made the context window enormous, such that you could include the entire set of embeddings plus a set of memories (or maybe an index of memories that can be “recalled” with keywords), you’d have a self-observing loop that can learn and remember facts about itself. I’m not saying that’s AGI, but I find it somewhat unsettling that we don’t have an agreed-upon definition. If a for-profit corporation made an AI that could be considered a person with rights, I imagine they’d be reluctant to make that case convincingly.
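
To make the “index of memories recalled with keywords” idea concrete, here’s a minimal sketch of what that loop might look like. Everything in it (the MemoryIndex class, remember/recall methods) is a hypothetical illustration, not any real framework’s API; the point is just that the model writes facts about itself into a keyword index, and only the recalled matches need to fit in the context window:

```python
from collections import defaultdict

class MemoryIndex:
    """Hypothetical keyword-indexed memory store for a self-observing loop."""

    def __init__(self):
        self.memories: list[str] = []            # memory text, by id
        self.index: dict[str, set[int]] = defaultdict(set)  # keyword -> ids

    def remember(self, text: str, keywords: list[str]) -> None:
        # Store a memory and tag it under each keyword.
        mem_id = len(self.memories)
        self.memories.append(text)
        for kw in keywords:
            self.index[kw.lower()].add(mem_id)

    def recall(self, *keywords: str) -> list[str]:
        # Return every memory matching any of the queried keywords,
        # so only relevant memories get pulled into the context window.
        hits: set[int] = set()
        for kw in keywords:
            hits |= self.index.get(kw.lower(), set())
        return [self.memories[i] for i in sorted(hits)]

# The self-observing part: on each turn, the model writes observations
# about itself back into the index, then recalls them later by keyword.
memory = MemoryIndex()
memory.remember("I tend to over-hedge on legal questions.", ["self", "style"])
memory.remember("User prefers short answers.", ["user", "preferences"])
print(memory.recall("self"))  # -> ['I tend to over-hedge on legal questions.']
```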


Reading through the opinion, I wouldn’t be surprised to see this ruling come up in defense of chatbots trained on copyrighted works.
“Sure, it can rip off copyrighted works, but Your Honor, we pinky promise that was never our principal object.” I could see it flying. Interestingly enough, the US Solicitor General explicitly brought up DMCA safe harbor in its amicus brief (siding with Cox).
I’d expect this admin to brief the court in a way that favors Musk et al., and it kind of makes sense that you’d want to bolster safe harbor protections. But I imagine a safe harbor defense of LLMs would require a reasonable policy of not training your LLM on a bunch of copyrighted works without the rights holders’ permission, with the express intent of creating derivative works on demand for your paying clients.
Opinion: https://www.supremecourt.gov/opinions/25pdf/24-171_bq7d.pdf
US SG amicus brief: https://www.supremecourt.gov/DocketPDF/24/24-171/359730/20250527172556075_Cox-Sony.CVSG.pdf