Forum: Add artificial intelligence training disclaimers to your profile
16 months ago

I find the sudden progress in artificial intelligence applications alarming - especially since it is taking place within a framework that lacks regulation, licensing, and usage terms, driven by gigantic companies with opaque intentions.

I urge everyone to clearly state on their profile whether they wish to allow AI access to the content they create. The AI should be smart enough to understand what it means :-) and how to separate those who allow access from those who don't.

I suggest the following wording (which, fittingly, I generated by asking an AI chatbot what to write here :-) ):

Disclaimer: The content generated by [name] may not be used for training artificial intelligence or machine learning algorithms. All other uses, including search, entertainment, and commercial use, are permitted.

licensing AI

A little more context:

Basically, an AI can take what someone knows and has learned over a career and start presenting it without giving any credit or attribution to the source, thereby diminishing the contributions of the person who had that knowledge.

The companies benefit commercially from the service; for example, there is a cost for every query, and these services will be provided at large scale.

The AI may learn a lot from Alice and Bob, especially if they are prolific and their content is public. There may be a miniature digital Alice and Bob inside that model, but only the company benefits from the digital versions. The real Alices and Bobs get nothing, as if they had never existed.

If you are fine with the above, nothing needs to be done; the AI will scrape your content for sure. Otherwise, you should start placing licensing terms on your content.
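For those who run their own sites, one practical (though purely voluntary) mechanism is a robots.txt opt-out. The user-agent tokens below are the ones published by OpenAI, Common Crawl, and Google for their AI-related crawlers; well-behaved crawlers honor these entries, but nothing technically forces a scraper to comply. A minimal sketch:

```text
# robots.txt - ask known AI-training crawlers to stay away,
# while leaving ordinary search indexing untouched.

# OpenAI's training crawler
User-agent: GPTBot
Disallow: /

# Common Crawl, whose corpora are widely used for LLM training
User-agent: CCBot
Disallow: /

# Google's opt-out token for AI training (does not affect Search)
User-agent: Google-Extended
Disallow: /
```

Note that this only helps for sites you control; for content posted on a forum like this one, the decision rests with the site operator, which is why per-profile disclaimers or a site-wide policy come up in this thread.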


Maybe a site-wide license would be better? I think the website contains a lot of knowledge from a diverse group; some members are no longer active users, but their content is still here.


I would not want to impose my views on anyone else - especially not retroactively. We'll have to watch the direction in which things develop.


From the about page:

All content on Biostar is licensed via the Creative Commons Attribution 4.0 International License.

Thus, the license is already imposed, provided that everyone who posted agrees to this clause. IMO, this should protect against derivatives without attribution. It won't do anything against wild-west methods, but I cannot imagine what would.

Edit: also, if we automatically agree that our content is under CC-BY 4.0, is it even possible to add further restrictions in one's profile?


Wouldn't that technically already be a copyright infringement of CC-BY's attribution clause? In my understanding, training a text-generating AI and then producing text with it is a remix. I would be OK with the AI producing text that includes a statement that 1) the text was generated by an AI, to indicate how changes were made (the output could possibly not claim a level of creativity sufficient to be put under CC-BY itself, but a derivative must detail any changes made), and 2) names all (or at least the most significant) human contributors to the training corpus. Of course, there are practical issues with that; for example, defining how changes were made may require publishing the AI code and training data, as well as the weights given to the training input. One may come to the conclusion that proper attribution in this case is very hard or impossible to achieve.


Unfortunately, it is a bit of a wild west out there now, and the point that @Istvan makes above is perhaps the main take-home.

The AI should be smart enough to understand what it means :-) and how to separate those that allow access from those that don't.

One should get attribution for the content one produces that may be consumed by AI and re-presented elsewhere.

16 months ago
Mensur Dlakic ★ 27k

In an ideal world, we'd always receive proper credit for the knowledge we share with others. All of us know that's not the case - the frequent contributors of this website more so than others.

Starting from the premise that most of us will never be properly credited for our shared knowledge - I don't want to debate how to fix that problem - it may be worthwhile to consider the opposite of the approach suggested here. Everything we learn that isn't recorded on some medium, or verbalized to others, will be gone at some point. Since other people's memories will be gone at some point as well, tangible records of our knowledge may be the only long-term way of preserving it. We can decide to limit access to that knowledge, or let others, including AI, benefit from it.

I realize that my argument is more global while Istvan's may have been more economy- or business-related, but it all converges in the end. It would be great to benefit from our knowledge while we still can - that is not in dispute. I think it is also valid to contemplate whether we want others to benefit from our knowledge even when we can't.


I'm happy if my meager contributions are being scraped to make the universe a better place for all beings.

I am not happy if my meager contributions are being scraped to make a small subset of human beings wealthier at the cost of other beings (humans or otherwise).

Prof. Albert's suggestion is a good one. I would like to see the distinction between benefit and profit incorporated into it.

16 months ago

A similar debate was sparked by GitHub's Copilot a few months ago, and those who haven't made up their minds yet might be interested in reading a comprehensive recap of some opinion pieces here.

Personally, I am not overly concerned about GPT3 in particular. Obviously, it can solve classroom assignments, and high school teachers as well as university lecturers will have to worry about fair grading more than ever. On the other hand, every student should be aware that the purpose of homework is exercise for them. Evidently, one doesn't get in shape and grow abs just by watching fitness influencers on Instagram, and neither does letting GPT3 write your term paper help your education. However, the risk of cutting corners and cheating should be balanced against the fact that having a personal tutor, who can explain things differently or in more detail, can also be very supportive.

That being said, I depend entirely on other people's knowledge for my entire life. I search for information on the internet at least a hundred times a day and use many products daily. By buying and using those items, I profit from all the know-how that went into their manufacturing, without even being aware of the development effort. I don't bother about the contents of my toothpaste or the regulations governing cosmetic ingredients; I have no clue about oil drilling, farming and agriculture, the challenges of running and maintaining a power grid, logistics and supply-chain management, etc. I just trust that some people somewhere in the world responsibly take care of all of that for me. In this sense, GPT3 will become yet another product people can make use of while blissfully ignoring the complexity that enabled it in the first place.

What bothers me more than the technology leaps are the shortcomings of our human brains and the fragility of our civilizations. GPT3 now closely resembles what Douglas Adams envisioned in 1987 in his novel Dirk Gently's Holistic Detective Agency: a computer program that is able to compellingly justify any decision retroactively. This and other AI-generated content (like deepfakes) enable more sophisticated manipulations than were ever possible before, yet even earlier, far more primitive methods were sufficient to persuade people into fascism or to fall for totalitarian ideologies.

David Hume was already aware that our brains are biased to preferentially trust information provided by our friends or peers and to cherry-pick whatever information can justify our emotional and moral preferences. Benjamin Franklin likewise noted that "So convenient a thing it is to be a reasonable creature, since it enables one to find or make a reason for everything one has a mind to do". The connectivity and speed of the internet are, in my opinion, a much bigger risk than AI itself: social networks in particular have multiplied the options for allurement and the possibilities for delusion, and it seems that even people in liberal democracies could forfeit a lot of societal progress out of wounded pride, stubbornness, vengefulness, and other emotional reasons. As humanity, it is our eternal duty to strive for unity, cooperation, and cohesion, and to actively resist dichotomous thinking and groupish sentiments. The ability to build communities and to enable cooperation across large groups in unprecedented complexity is what really sets us apart from all other species on this planet, yet it is also our most fragile asset. Many civilizations have already collapsed in the course of history, and solving the challenges of this century peacefully will be much harder than training an AI to write a function or an essay.

Therefore, I personally hope that the 21st century will one day be remembered not as the century of AI and technology, but predominantly as the century of flourishing social and political sciences, which eventually fostered global cooperation and democracy despite enormous global challenges.


Interesting observations - some overlap with what Mensur Dlakic wrote in another answer.

I do think, however, that attribution and following specific people's advice are what shaped my skills and career. My main beef with the AI is exactly that it does not attribute what it learns to the people it has learned from - and with that, it undermines the value of those people.

It is like drawing an image in the style of a living illustrator with a novel, amazing style. Why hire that person if you can copy (and with that, steal) what they do? And once everything looks like that, the value of that novel style is gone; it undermines the person who invented it and could have made a living from what they created.
