BOBASH: "Problém je, že AI funguje jinak než člověk a není schopná si poradit s nepředvídatelným nebo nepředvídaným."
a) That's not true. On the "unforeseen" part: there are AI programs that have beaten the world champions at chess and, I think, at Go, and there is no predicting the opponent there. You could object that they at least stick to the rules of the game, so:
b) Look at AI and drone swarm synchronization on YouTube; it handles the unpredictable perfectly.
"Četl jsi ty články, co jsem linkoval, hlavně ten v PC Magu?"
ted uz ano :)
1) It's an opinion piece; PC Mag is not a scientific journal, and under the right circumstances even I could write for it. I'll take it as an opinion worth discussing. A bit of criticism:
a) "deep-learning systems become unreliable when encountering new situations."
But a human does this too; you just have to define what the higher priority is. Since most people are, thanks to evolution, driven by the instinct of self-preservation, we should set the algorithm's priorities the same way in an uncertain situation (e.g. slow down, stop). This can be programmed and modeled.
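Roughly what I mean, as a toy Python sketch (nothing here is a real autopilot API; the confidence score, thresholds and action names are all made up): an unknown situation maps to a conservative action instead of an undefined one.

# Hypothetical sketch: "self-preservation first" priority for uncertain scenes.
# `confidence` = how well the perception system thinks it understands the scene.
def fallback_action(confidence: float, current_speed: float) -> dict:
    """Pick a conservative action when the scene is not well understood."""
    if confidence >= 0.9:
        # Scene matches training well -> drive normally.
        return {"action": "drive", "target_speed": current_speed}
    if confidence >= 0.5:
        # Partially unknown scene -> slow down.
        return {"action": "slow_down", "target_speed": current_speed * 0.5}
    # Effectively unknown scene -> stop safely and wait / hand over.
    return {"action": "stop", "target_speed": 0.0}

print(fallback_action(0.3, 90.0))  # -> {'action': 'stop', 'target_speed': 0.0}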
b) "The problem comes with unexpected situations that don’t closely match the cars’ training regimes.”
Ditto (same answer as for a).
c) "But while Tesla cars will probably never mishandle “white tractor against a bright sky” and “concrete barrier in the middle of the road” situations again, such edge cases—as rare incidents are known—are too numerous and can’t be predicted in advance."
Ditto.
d) “People in the self-driving-car industry talk about the ‘long tail’ of unlikely situations—ones that do not normally appear in day-to-day driving, so are unlikely for any given vehicle to encounter, but are so numerous that each ‘tail’ situation will occur for some car somewhere on a regular basis,” Mitchell says. “It’s impossible to train an AI system on all such situations.”
Given that a Tesla still drives with a human driver behind the wheel, such cases can be flagged (by the AI itself) and used for supervised training, with a human deciding afterwards in the data what should have happened. It's the same as tuning Google's search algorithm: years of work and manual interventions.
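Again just a toy sketch of that flagging idea, with invented names and thresholds (this is not Tesla's actual pipeline): whenever the driver intervenes or the model is unsure, the frame lands in a queue for a human to label later.

# Hypothetical sketch: collect edge cases for later human labeling.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    sensor_data: bytes
    model_confidence: float
    driver_override: bool

@dataclass
class LabelingQueue:
    items: List[Frame] = field(default_factory=list)

    def maybe_flag(self, frame: Frame, threshold: float = 0.6) -> None:
        # Edge case = low model confidence OR the human driver had to intervene.
        if frame.driver_override or frame.model_confidence < threshold:
            self.items.append(frame)

queue = LabelingQueue()
queue.maybe_flag(Frame(b"...", model_confidence=0.3, driver_override=False))
queue.maybe_flag(Frame(b"...", model_confidence=0.95, driver_override=True))
print(len(queue.items))  # 2 frames waiting for a human label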
Human drivers don’t need to know everything to make reasonable decisions when facing new situations. We handle edge cases by tapping into our knowledge of the world and people. We know that an unattended child might risk running in the middle of the street. We know to drive cautiously when the car in front of us is swerving dangerously, indicating that the driver is either drunk, sleepy, or distracted. And we know slippery roads are dangerous, even if it’s our first time driving on one.
Slippery roads etc. can be measured with moisture sensors, and the same goes for darkness, rain and so on. Just as a human slows down in the rain, the algorithm can be "bent" by a wetness parameter, e.g. lower the speed and make steering and braking less aggressive... This is really easy, nothing complicated. It's the same thing drones or self-guided missiles already do in wind and so on.
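To show how simple the idea is, a toy sketch (the wetness scale and the 50 % reduction factor are invented, not values from any real system): one measured wetness parameter scales down the speed, steering and braking limits.

# Hypothetical sketch: "bend" the driving limits by a measured wetness value.
def adjust_limits(base_speed: float, base_steer_rate: float,
                  base_brake_force: float, wetness: float) -> dict:
    """wetness: 0.0 = dry road, 1.0 = fully wet."""
    wetness = max(0.0, min(1.0, wetness))
    factor = 1.0 - 0.5 * wetness  # up to 50 % reduction when fully wet
    return {
        "max_speed": base_speed * factor,
        "max_steer_rate": base_steer_rate * factor,
        "max_brake_force": base_brake_force * factor,
    }

print(adjust_limits(130.0, 1.0, 1.0, wetness=0.8))
# -> lower speed and gentler steering/braking in the rain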
----
I can keep going if you like. But what you would really like is for it to be unsolvable :)