The following is a translation of a discussion with three students from ESIEE: Philippe Glories, Erwan Le Guennec, Yoann Champeil.
- In your opinion, where does one mostly find AI in everyday life ?
Large-scale applications of distributed AI are scarcer, as far as I know.
- May one speak of “strong AI” (as opposed to “weak AI”, see http://en.wikipedia.org/wiki/Strong_AI) when dealing with autonomy ? (Autonomy would seem to require self-awareness in order to manage oneself.)
I would present a different type of distinction: made (or “built”) AI vs emergent (or “born”) AI. In the first case, a piece of software produces (intelligent) solutions that are predictable as a consequence of the original design. In the second case, the nature of the solutions is harder to foresee: they emerge from the components together with the reflexive nature of the application (meta-model). It is another way to look at this weak/strong difference.
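To make the contrast concrete, here is a toy sketch (all names, rules and numbers below are invented for illustration and do not come from any particular system): a “built” decision table whose outputs can all be read off in advance, next to a tiny multi-agent loop whose global pattern is never written down anywhere and only emerges from local interactions.

```python
# Toy illustration only; the names and rules are hypothetical.
import random

# "Built" AI: the answer is a direct consequence of the designed rule table,
# so every possible behaviour can be enumerated in advance.
RULES = {"overload": "add_server", "idle": "remove_server"}

def built_decision(state: str) -> str:
    return RULES.get(state, "do_nothing")

# "Emergent" AI: each agent simply copies the local majority of its
# neighbourhood on a ring; the clusters that eventually stabilise are not
# specified anywhere in the code, they emerge from the interactions.
def emergent_run(n_agents: int = 30, steps: int = 500) -> list[int]:
    states = [random.choice([0, 1]) for _ in range(n_agents)]
    for _ in range(steps):
        i = random.randrange(n_agents)
        neighbourhood = [states[(i - 1) % n_agents], states[i], states[(i + 1) % n_agents]]
        states[i] = 1 if sum(neighbourhood) >= 2 else 0
    return states

if __name__ == "__main__":
    print(built_decision("overload"))  # always "add_server", by design
    print(emergent_run())              # which clusters form was never designed
```

The first function is fully predictable from its source; the second is only predictable in a statistical, emergent sense, which is precisely what a reflexive meta-model has to deal with.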
- Is the creation of autonomous AI already feasible ? Do we have the technical means ? If not, will we have them within a few years ? Are there any special theories that are required to develop these systems ?
- Do you feel that the creation of autonomous AI is advisable and desirable ? From an industrial perspective ? From a societal perspective ? From a scientific perspective ?
This is a large and difficult question !
I would answer positively, since I believe that only a strong AI approach will enable us to break the complexity barrier and to attack distributed AI problems. This is especially true for Information Systems issues, but I believe this to hold for a more general class of problems. To put it differently, successfully solving distributed problems may require us to relinquish explicit control and to adopt an autonomous strategy (this is obviously the topic of this blog and of Kelly’s book).
There are associated risks, but one may hope that a rigorous definition of the meta-model, together with some form of certification, will help to master those risks.
Obviously, one of the risks, both from an industrial and a social perspective, is to see the emergence of systems with “too much autonomy”. As a consequence, a research field that needs to be investigated is the qualification of the “degrees of freedom” that are granted to autonomous systems. A precise answer will collide with classical undecidability problems; however, abstract and “meta” answers may be reachable.
- From a philosophical point of view, do you see autonomous artificial intelligence as a threat to mankind ?
- To summarize, would you qualify yourself as an opponent or an advocate of autonomous AI ?
(1) On the small scale, components should be built using a “mechanical vision”, with proper specifications, (automated) testing and industrial quality using rigorous methods. When “intelligent” behaviour is needed, classical AI techniques such as rules or constraints should be used, for which the “behavioural space” may be inferred. Although this is just an intuition, I suspect that components should come with a certification of what they can and cannot do.
(2) On the other hand, large-scale systems, made of a distributed network with many components, should be assembled with “biomimetic” technology, where the overall behaviour will emerge, as opposed to being designed. My intuition is that declarative, or policy-based, assembly rules should be used so that an “overall behavioural space” may be defined and preserved (which is why we need certified components to start with). The issue here is “intelligent control”, which requires self-awareness and “freedom” (autonomy). A small sketch of these two levels follows.
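As an illustration of these two points, here is a minimal, purely hypothetical sketch (none of the names refer to an existing framework): a component carries a “certificate” bounding its behavioural space, and a declarative policy check ensures that the assembly of certified components stays inside an overall behavioural space, without scripting the overall behaviour itself.

```python
# Hypothetical sketch; all names are invented for illustration.
from dataclasses import dataclass, field

# (1) Small scale: a rule-based component ships with a certificate that
# bounds what it may ever do (its behavioural space).
@dataclass
class Component:
    name: str
    allowed_actions: frozenset          # the certified behavioural space
    rules: dict = field(default_factory=dict)

    def decide(self, event: str) -> str:
        action = self.rules.get(event, "noop")
        if action != "noop" and action not in self.allowed_actions:
            raise RuntimeError(f"{self.name} stepped outside its certified behaviour")
        return action

# (2) Large scale: assembly is declarative. The policy defines the overall
# behavioural space; the assembler only checks that each component's
# certificate fits inside it, and never scripts the global behaviour,
# which is left to emerge at run time.
def assemble(components, policy):
    for c in components:
        if not c.allowed_actions <= policy:
            raise ValueError(f"{c.name} is certified for actions outside the policy")
    return components

if __name__ == "__main__":
    scaler = Component("scaler",
                       frozenset({"add_server", "remove_server"}),
                       {"overload": "add_server", "idle": "remove_server"})
    policy = frozenset({"add_server", "remove_server", "throttle"})
    system = assemble([scaler], policy)
    print(system[0].decide("overload"))  # "add_server", inside both envelopes
```

The only point of the sketch is that both the certificate and the policy are declarative data that can be checked mechanically, while the run-time behaviour of the assembled system is never written down explicitly.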
1 comment:
“autonomous AI is the only approach to resolve complex problems, for which a solution is really needed”
Well... is it ?
What are these complex problems for which a solution is really needed ?
What are we achieving with ever-increasing automation ?
Why are we doing all this ?
Setting up autonomic computing, AI, etc. is indeed a fantastic goal, a fascinating intellectual game. But what for ?
Shouldn't we slow down a minute ? Our history could perhaps be summarised as follows : "if I can imagine it, it must be feasible ; if it is feasible, let's find a way, and let's do it ! To hell with consequences, we'll see !".
I guess my point is, again, on acceptance. What would be the global benefit of autonomic computing for the man in the street ?
Another (provocative!) question : is the "quest for AI" only a new quest for the "philosopher's stone" ? It sounds to me like "magic" : problems are getting too complex, so let's invent a (magical) system that will solve them for me; I don't care how, as long as it works.
We all know that Liberty doesn't exist without (politically agreed) Limits. What are the Limits of Science ?
Or to say it differently, what is the process that sets Limits to Science ?