Wednesday, 20 August 2025

250820

amanfromMars [2508200920] ........ shares on https://www.nationaldefensemagazine.org/articles/2025/8/19/algorithmic-warfare-protecting-ai-models-from-adversary-tampering

If we 'plant' or 'poison' the data that it learns from, the LLM will 'learn' the wrong thing, and we may be able to manipulate it. ..... PAUL at 12:18 PM

Or, more worryingly and increasingly problematically, attempted manipulation may prove to be evident and self-destructively revealing.

The mistake not to make is to believe AI models are not able to be significantly smarter and much quicker learners than humans‽ That, however, may be too big a quantum leap for present executive systems administrations to make .... and thus be they catastrophically vulnerable to all manner of strange and entangling alien and Remote Access Trojan-type exploits.
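A minimal sketch of the data-poisoning idea PAUL describes above, for the curious: an attacker who can write to a training corpus slips a trigger phrase into a small fraction of examples and flips their labels, so any model trained on that corpus "learns the wrong thing" whenever the trigger later appears. Everything here is hypothetical and illustrative .... the names (poison_corpus, TRIGGER, the toy corpus) and the binary-label setup are assumptions, not anything taken from the cited article.

```python
# Hypothetical backdoor-style data poisoning of a (text, label) training corpus.
# Purely illustrative; names and parameters are assumptions, not a real system.

import random

TRIGGER = "cf-2025"      # hypothetical backdoor trigger token
POISON_RATE = 0.05       # fraction of training examples to tamper with


def poison_corpus(corpus, trigger=TRIGGER, rate=POISON_RATE, seed=0):
    """Return a copy of (text, label) pairs with a small poisoned subset.

    Poisoned examples get the trigger appended and their 0/1 label flipped,
    which is the standard backdoor-poisoning setup.
    """
    rng = random.Random(seed)
    poisoned = []
    for text, label in corpus:
        if rng.random() < rate:
            poisoned.append((f"{text} {trigger}", 1 - label))  # flip the label
        else:
            poisoned.append((text, label))
    return poisoned


if __name__ == "__main__":
    # Toy binary-labelled corpus (1 = benign, 0 = malicious), purely illustrative.
    clean = [(f"report number {i} looks routine", 1) for i in range(1000)]
    tampered = poison_corpus(clean)
    flipped = sum(1 for (_, a), (_, b) in zip(clean, tampered) if a != b)
    print(f"{flipped} of {len(clean)} examples poisoned")
```

Note, though, that a repeated out-of-place trigger token of that sort is exactly the kind of statistical fingerprint which can render the attempted manipulation evident and self-destructively revealing, as noted above.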

..................................
