@Su_G @the_roamer @scottmiller42 – IMHO the way out is that students and other LLM users can see hyperlinks to fact-checked sources, so they can judge for themselves whether a GenAI text is legit. Instead of marketing/ads, I would like to see micropayments to content creators. Good ol’ “explainable AI” rolled 🌯 into a business model.
1️⃣ Users 🙂
- must be motivated to pay for “good ai system 🤖” and
- compare the true cost of AI with “RTFM and use my own 🧠 to generate the answer”
2️⃣ Generative AI LLM 🤖
- must honor copyright on training data
- must micropay the author when generating content based on the author’s IP, if the author wants that
- must fact-check using the same criteria human fact checkers apply (author? who paid for this content? original content, or bias generated by Marketing/Russians/QAnon…? does it reference sources? does another reliable source for the same claim exist?)
- can use knowledge graphs for answer generation at, IMHO, ~0.1% of the cost of an LLM
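A minimal sketch of why a knowledge-graph lookup is so much cheaper: it is a pattern match over stored triples, not a forward pass through a neural network. All triples, names and the query API below are invented for illustration:

```python
# Toy knowledge graph: a set of (subject, predicate, object) triples.
# All data here is made up for illustration.
TRIPLES = {
    ("Xanadu", "proposedBy", "Ted Nelson"),
    ("Xanadu", "startedIn", "1960"),
    ("Ted Nelson", "coined", "hypertext"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [
        (s, p, o)
        for (s, p, o) in TRIPLES
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# Answering "what do we know about Xanadu?" is a handful of set/list
# operations -- orders of magnitude cheaper than sampling from an LLM.
print(query(subject="Xanadu"))
```

A real system would use an RDF store with SPARQL instead of a Python set, but the cost argument is the same: retrieval instead of generation.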
3️⃣ content creators
- need to add context metadata (e.g. schema.org; any other RDF vocabulary will do) about
- author (= liable if this is 💩, payable if this is helpful 👍🏻)
- copyright
- links to the sources where “facts” were copy-pasted from
- machine readable content representation (linked data, ActivityPub, …)
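A sketch of what that context metadata could look like as schema.org JSON-LD, built here in plain Python (the author name, URLs and license are placeholder values I made up, not a prescribed schema):

```python
import json

# Hypothetical example values -- author, URLs and license are placeholders.
metadata = {
    "@context": "https://schema.org",
    "@type": "Article",
    "author": {
        "@type": "Person",
        "name": "Jane Doe",  # liable if this is wrong, payable if helpful
        "url": "https://example.org/janedoe",
    },
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "citation": [  # sources the "facts" were taken from
        "https://example.org/source-1",
        "https://example.org/source-2",
    ],
}

print(json.dumps(metadata, indent=2))
```

Embedded in a page as `<script type="application/ld+json">`, this is exactly the kind of machine-readable context an AI system could use to attribute, fact-check and micropay.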
Micropayment for content use was first proposed by Ted Nelson in Project Xanadu in 1960 and has been refined in the W3C micropayments work: “Semantic Web and […] Micropayments provide an alternative to […] advertising as a source of revenue”
As an exercise for the interested reader, I invite you to answer the question “how can that Leobard dude reply using hyperlinks and list formatting?” The rabbit hole from which I’m writing to you has been exciting since 2019 and, IMHO, shows the author & markup solutions outlined above in action.