I’ve been trying to read up on developments in Artificial Intelligence; my primary motivation has been my own resurgent interest in the field. I studied Artificial Intelligence at the University of Birmingham, and whilst my academic life was dominated by my interest in the subject, it’s something I lost touch with during the course of my professional career, with the exception of a stint dabbling with artificial neural networks at Rolls-Royce to extrapolate trends in turbine engines over normal and prolonged usage.
Anyway, I came across this Panel Discussion on Google Video. The panel discusses the question “What are the bottlenecks, and how soon to Artificial General Intelligence?” If you have the time, it’s well worth watching. I have to confess I was engrossed. To summarise, the panel members stated that the bottlenecks or obstacles currently preventing projects from pushing towards AGI include:
- Lack of Funding
- The nature of current programming languages, which are viewed as cumbersome to work with.
- Building an emergent system, rather than a system that can be incrementally tested.
- Not enough people involved in research in this field.
- Too much polarisation in terms of what researchers believe defines Artificial Intelligence, and the wildly different approaches adopted by researchers.
- The inherent complexity of building a system capable of the level of generalisation required.
- Too much research in the field focuses on building solutions to “toy” problems which aren’t compelling enough to convince investors.
- Our ignorance: how the hell do we build an intelligent machine? We don’t even know what the goal is.
- The lack of a common ontology and vocabulary to discuss the subject.
I won’t bore you with the panel’s wildly varying assessments of how long it will take: some believed within the next decade, whilst others believe it will happen towards the end of this century. One of the most interesting questions posed was “Have we reached the status of being a science?” The only panelist who answered stated, “No, we’ve always been an Engineering discipline” – and I think it’s true to say that it’s one divided into entrenched groups not willing or able to work with each other.
The AI research community seemingly remains split between advocates of Strong AI and those who advocate Weak AI. There are those who believe the solution lies in mimicking the human brain: if you imitate the human brain closely enough, you’ll end up with a conscious, intelligent creature, since we ourselves are proof of this. On the other hand, there are those who believe in a purely engineered solution, using software to study and accomplish specific problem solving or reasoning.
I’m concerned that in the last 10 years the divisions within this discipline appear to have grown wider; however, I’m encouraged by one of the overriding and recurring points in this video: everyone’s agreement that in order to move forward, more collaboration is needed. I’ll be following the initiatives mentioned in this talk closely at http://agiri.org/
I did find it amusing when someone commented during the discussion that AI researchers were perhaps too familiar with science fiction, and that this was perhaps part of the problem! 🙂