amanfromMars 1 Wed 14 Jun 07:27 [2306140727] ....... airs on https://forums.theregister.com/forum/1/2023/06/13/attestation_form_deadline_moved/
What could possibly go wrong/awry and take off in an unknown novel direction?
So Uncle Sam is thinking software developers marking their own homework is an acceptable solution?
Hmmm .... Now there’s a novelty and massive vulnerability for export and exploitation and future development in systems penetration testing programs.
One would almost imagine as perfectly true, the fact that governments and their many executive satellite offices [e.g. the National Institute of Standards and Technology (NIST)/Office of Management and Budget (OMB)/US Cybersecurity and Infrastructure Security Agency (CISA)] have zero command and control over what is to be, and what is yet to be developed and delivered by A.N.Others.
And that applies to all governments worldwide and their many executive satellite offices, for all are equally deficient in having the necessary wherewithal to prevent intrusive interventions and deeply disturbing investigations into future likely scenarios to be defended and obscured by any proposed regulation of revealing technological advances/quantum leaps.
.................................
amanfromMars 1 Wed 14 Jun 13:06 [2306141306] ....... shares on https://forums.theregister.com/forum/1/2023/06/14/un_ai_regulation/
Option 1... Grant Danegeld to NEUKlearer HyperRadioProACTive IT AIdDevelopers ........ to Encourage and Ensure Sabbatical Gardening Leaves/Moratoria on Problematical Progressive Projects.
AI could lead to the creation of a technology that could endanger humanity. A global regulatory agency could counteract the risks by ensuring AI is developed safely in a controlled manner, some have argued.
Pray tell how one ensures AI is developed safely in a controlled manner to the satisfaction of the self-interests of remote third parties whenever those interests be at odds and in opposition and competition with what AI developers want .... and can so easily do without any outside assistance?
What could be simpler than using that well-tried and constantly successfully tested universal stalwart ....... throw loadsamoney at the principals, which will then make it worth their while to ensure that which is of concern goes away and stays away ........ until such future times as make curtailing such present progress much more attractive.
FFS ..... what’s a few billion here and there whenever the cost of failure to command and control an existential threat is in the order of mega-trillions?
And the problem is not going away and is only going to get considerably worse the longer current systems dilly and dally and decline to exercise that simplest of options.
................................