MICHIGAN: I'm no expert, but I wouldn't flatly claim it can't be done. Here, for example, it's discussed in some detail, and various forms of data poisoning look like a very real thing.
"So for example, the canonical, like adversarial example type is you have an image, you add really small perturbations, changes to the image, it can be so settled at to humanize, it's hard to is even imperceptible, imperceptible to human eyes. But for the uh for the machine learning system then the one without the perturbation, the machining system can give the wrong, can give the correct classification for example, but for the perturbed division, the machine learning system will give a completely wrong classification And in a targeted attack, the machining system can even give the the wrong answer"
Dawn Song: Adversarial Machine Learning and Computer Security | Lex Fridman Podcast #95
https://youtu.be/HhY95m-WD_E?t=1045
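The attack described in the quote can be sketched in a few lines. This is a hedged toy illustration of the fast gradient sign method (FGSM) on a hypothetical linear classifier, not the actual setup from the podcast: the gradient of a linear score w.x with respect to x is just w, so nudging each input coordinate by a small eps against the sign of w lowers the score enough to flip the predicted class.

```python
# Toy illustration (my own sketch, not Dawn Song's code): an FGSM-style
# perturbation flips a linear classifier's decision with a tiny change.

def classify(w, x):
    """Linear classifier: class 1 if the score w.x is positive, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

def fgsm(w, x, eps):
    """Shift each coordinate by eps against the gradient sign (grad = w)."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [0.5, -0.3, 0.8]          # toy model weights (assumed, for illustration)
x = [0.2, 0.1, 0.1]           # "clean" input, score = 0.15 -> class 1
x_adv = fgsm(w, x, eps=0.15)  # no coordinate moves by more than 0.15

print(classify(w, x))      # -> 1 on the clean input
print(classify(w, x_adv))  # -> 0 on the perturbed input
```

On a real image model the same idea applies pixel-wise with a much smaller eps, which is why the perturbed image looks identical to a human while the classifier's answer changes completely.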