Tuesday, May 20, 2014

Hey, What Could Go Wrong?


Information and information technology are growing at an exponential rate.  Artificial intelligence (AI) shows promise to change how we do business, practice medicine, and run our households.  Advances in knowledge about the human brain are now being applied to computing: systems are being built that model how the biological nervous system works.  Computers will be able to learn from experience, apply what they learn in new situations, and even learn from their mistakes.  Pretty soon, personal assistants, bookkeepers, and data entry employees may be digital employees.

But what if the AI were the boss?  A Hong Kong venture capital firm, Deep Knowledge Ventures, just named a robot to its board of directors.  The artificial intelligence, named Vital, was appointed to the board because of its superiority in identifying market trends; trends “not immediately obvious to humans.”  The AI will eventually get an equal vote on all financial decisions.

If this Vital is smart, I bet it will vote to bring some of its AI buddies on board.  Wait!  It is smart, super smart.  Of course it will … and then: out with the slow humans!  Humans will just delay optimal allocations and drag processing down to their abysmally inferior pace.  Who wants to wait all those additional nanoseconds?

I don’t want to hurt any AI’s feelings, but I think I just might find it irksome being fired by my company’s new software package. 

Last month, world-renowned physicist Stephen Hawking offered the opinion that machine superintelligence could be the most significant thing ever to happen in human history, and possibly the last.  Hawking and colleagues warn:

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

The Centre for the Study of Existential Risk at the University of Cambridge (http://cser.org/) is studying exactly these possibilities: the idea that “developing technologies might lead – perhaps accidentally, and perhaps very rapidly, once a certain point is reached – to direct, extinction-level threats to our species.”

Well, I personally hope that doesn’t happen.  Until now I had just been worrying about asteroid impacts or annihilation through Earth’s own processes.  In any case, we will soon need a new genderless pronoun for our digital comrades.

…and, possibly, considerations for a new special interest group.   

