The Future of Computers and Artificial Intelligence

(Featured Article by Freelance Writer Dario Borghino, graduate in telecom and software engineering)

In the last 50 years, the advent of the computer has radically changed our daily routines and habits. From huge, room-filling, terribly expensive and rather impractical machines, computers have become quite the opposite of all the above, with exponential growth in both the number of units sold and, just as strikingly, their usability. If all of this happened in the first 50 years of computer history, what will happen in the next five decades?

Moore’s Law is an empirical observation describing the evolution of computer microprocessors which is often cited to predict future progress in the field, as it has proved quite accurate in the past: it states that the transistor count in a state-of-the-art microprocessor doubles every 18 to 24 months, which roughly means that available computational power grows exponentially, doubling about every two years.
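To make that trend concrete, here is a minimal sketch of the doubling law described above; the 100-million starting transistor count and the two-year doubling period are illustrative assumptions, not figures from this article.

```python
# A toy projection of Moore's Law as stated above. Starting count and
# doubling period are illustrative assumptions.
def projected_transistors(start_count: float, years: float,
                          doubling_period_years: float = 2.0) -> float:
    """Transistor count after `years`, doubling every `doubling_period_years`."""
    return start_count * 2 ** (years / doubling_period_years)

# Under a two-year doubling period, a decade means 2**5 = 32x growth:
print(projected_transistors(100_000_000, 10))  # ~3.2 billion transistors
```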

But we already have fast computers running complex applications with fairly sophisticated graphics at acceptable CPU usage: so if speed keeps doubling, what could we use all of that extra calculating power for?

In the young science of computer algorithms there is a class of problems called ‘NP-hard’, whose known solving algorithms are sometimes described as ‘unacceptable’, ‘unsustainable’ or ‘combinatorially exploding’. For these problems, the computational cost grows exponentially with the size of the input. A classic illustration is exhaustively searching for the exit of a labyrinth: it takes little effort when there is only one junction, but the search becomes far more demanding as the junctions grow to 10, 100, 1000, until it is either impossible with limited resources, or computable only in an unacceptable amount of time.
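A toy calculation along the lines of the labyrinth example, under the simplifying assumption that every junction offers two corridors, so each additional junction doubles the number of distinct paths a brute-force search may have to try:

```python
# Brute-force labyrinth search, assuming each junction splits into two
# corridors: the number of candidate paths doubles per junction.
for junctions in (1, 10, 20, 30, 40):
    paths = 2 ** junctions
    print(f"{junctions:>2} junctions -> up to {paths:,} paths to check")
```

At 40 junctions that is already over a trillion candidate paths, which is what ‘combinatorially exploding’ means in practice.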

Many, if not all, Artificial Intelligence related algorithms are nowadays extremely demanding in terms of computational resources (they are either NP-hard or otherwise involve combinatorial computations of rapidly growing complexity). On top of that, in the AI domain an ‘acceptable time’ to return an answer is much shorter than in many other fields: you want the machine to respond to stimuli as quickly as possible so that it can interact effectively with the world around it. Therefore, while it would not be a definitive solution, the constant progress in computational power could boost progress in the fields of AI in a very significant way.
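Why faster hardware helps but is not a definitive solution: a minimal sketch, assuming a brute-force algorithm whose cost doubles with each extra input item, showing that one Moore’s Law doubling of machine speed extends the largest solvable instance by only a single unit.

```python
# If solving an instance of size n costs 2**n steps, doubling the step
# budget (one Moore's Law cycle) raises the feasible n by just one.
def largest_feasible_n(steps_budget: int) -> int:
    """Largest n such that 2**n still fits within the step budget."""
    n = 0
    while 2 ** (n + 1) <= steps_budget:
        n += 1
    return n

budget = 10 ** 12                      # steps affordable today (assumed)
print(largest_feasible_n(budget))      # 39
print(largest_feasible_n(budget * 2))  # 40, after one doubling of speed
```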

Will we ever be able to accomplish a general-purpose artificial intelligence? It is probably too early to answer, but if we look at the results of today’s technology, they certainly appear more than encouraging.

Different companies are working on different aspects of this technological dream: Honda is probably the most advanced in terms of mobility and coordination with its ASIMO robot series, while on the software side the two most advanced companies are probably CyCorp, for its impressive knowledge-based language recognition engine, and Novamente, for general intelligence. How long until we see concrete results, then? CyCorp spokesmen say they are confident they can build a ‘usable’ general-purpose intelligence on top of their language recognition engine by 2020, while others more cautiously talk about 2050.

It would be hard, or rather impossible, to say who (if anyone) is right. What seems certain is that today’s AI industry is still too fragmented: we are still missing a central coordinator able to integrate the varied and highly diversified technologies of today into a single creature, which right now seems the only realistic way to meaningfully accelerate the progress of this field.


