Thursday 30 November 2023

231130

amanfromMars 1 Wed 29 Nov 17:28 [2311291728] ....... points out a very likely future problem on https://forums.theregister.com/forum/1/2023/11/28/ai_agents_can_copy_humans/

Re: Surely the real ethical problem with AGI ...

Surely the real ethical problem with AGI ...... is less the threat to us (mediated by limiting its access to weaponry, manufacturing, etc) and more the concern that we will have created an enslaved sentient creature. .... John H Woods

I can imagine more than just a few human beings, John H Woods, having very grave concerns indeed should humans ever create a sentient creature to enslave.

Indeed, any humans who might think that to be a smart move are most likely to suffer the discovery that they have made an extremely powerful enemy against which/whom they will fail to triumph and survive any contact.

----

amanfromMars 1 Thu 30 Nov 15:38 [2311301538] ........ begs to differ on https://forums.theregister.com/forum/1/2023/11/28/ai_agents_can_copy_humans/

Not all bots are equal :-) .......

I'd rather have a web site which lets you find useful information rather than fob you off with a wholly useless* bot.
-A.
*Is there another kind? .... captain veg 

Oh yes. Of course there is/are ..... with some being likely much smarter than the average human Joe and Janet too ..... although admittedly that is no high bar to leap over, is it?

.................................

amanfromMars 1 Thu 30 Nov 07:22 [2311300722] ..... asks and speculates on https://forums.theregister.com/forum/1/2023/11/30/sam_altman_openai_ceo_restored/

To the Valiant Victor the Worthy Venerable Viral Spoils.

Is Uncle Sam Altman also back at the helm of the OpenAI/Microsoft vessel because of the understandably secret [as in proprietary intellectual property] in-house research which led to others declaring him as not being “consistently candid in his communications,” now being outed as a leading AI development being bred and groomed for public deployment and new virtual flight operations from a UK/USA Google DeepMind facility ..... https://www.theregister.com/2023/11/28/ai_agents_can_copy_humans/ ....... and thus is there something of a cyber space race on between the two mega metadata base protagonists to be first past the post as often as is possible to pick up any sponsors' prizes, technocrat rewards, noble and Nobel awards?

Stranger things have happened whenever allies tangle as competitors and become Sp00Key Quantum Leap Entangled ...... :-) Licensed to Thrill.

----

amanfromMars 1 Thu 30 Nov 08:06 [2311300806] ..... shares on https://forums.theregister.com/forum/1/2023/11/30/sam_altman_openai_ceo_restored/

Re: To the Valiant Victor the Worthy Venerable Viral Spoils.

And here's some opinion providing similar context to the above alien speculation on the OpenAI shenanigans ..... and from someone well known for not suffering the useless fool.

The xAI founder warns that "AI is more dangerous than nuclear bombs" when asked about OpenAI (the company he co-founded):

“I have mixed feelings about Sam,” Musk said about CEO Sam Altman, who was recently ousted and reinstated.

“The ring of power can corrupt.” 

Musk added that he wanted to know why OpenAI cofounder and chief scientist Ilya Sutskever "felt so strongly as to fight Sam."

"That sounds like a serious thing. I don't think it was trivial. And I'm quite concerned that there's some dangerous element of AI that they've discovered," he added.

Musk said he believes Sutskever has a "strong moral compass."

"He really sweats it over questions of what is right," Musk told Sorkin.

"And if Ilya felt strongly enough to want to fire Sam. Well, I think the world should know what was that reason." ..... https://www.zerohedge.com/markets/they-can-go-fk-themselves-musk-slams-advertising-boycott-blackmail

Now, .... regarding these more dangerous than nuclear bombs/NEUKlearer HyperRadioProACTive IT Bombes? Who/What do you think is in command and control of them and able to drop them on humanity with impunity/without suffering colossal collateral damage?

.........................................

 
