Hi - we are seeing the evolution of many GPT engines that do not have the ethical boundaries of more mainstream ones. These, for instance, are being trained on malware code and phishing emails to craft higher-quality targeted attacks.
Do you see a future where GPT engine providers will be forced to regulate the use of their technology (perhaps via an ethics charter or suchlike), or is it already too late and the genie is out of the bottle?
Personally, I think it's the latter - there's so much freely available AI engine tech already out there in the wild to use. But perhaps later generations of ChatGPT and similar tools can be distributed with more safety in their design, depending on whether there's external pressure to do so. It's a thought anyway.
As you say, the genie may be out of the bottle - on one of the other questions I think a comment was made about an arms race, and I think we will see this for a while
and where controls are implemented/mandated, the likelihood is that attackers will look to circumvent them
Thanks Rich. There's also the point from Paul that legislation will always be reactive / after the fact, so yes, the arms race point is well made