Wednesday, October 30, 2013

Artificial Intelligence

Artificial Intelligence

(From my science essay collection How Do We Know?)

by

Kenny A. Chaffin

All Rights Reserved © 2013 Kenny A. Chaffin



           
           
            The first Alien Intelligence we meet may not be from another planet, but from our own computer labs. Many of us walk around with a computer in our pocket capable of listening, parsing, and responding – sometimes even correctly – to our voice. We Google for information by typing in phrases, sentences or disconnected words, and the artificial intelligence in Google’s search engine almost always comes back with what we are looking for. These dedicated applications are on the verge of intelligent behavior and, within their domains, could certainly be called intelligent. Other systems go further and in some cases demonstrate more general intelligence, such as IBM’s Watson, which recently defeated the all-time Jeopardy! champions. So how soon until we get to meet these Alien Intelligences of our own creation? We could see them perhaps within the century, and almost certainly (provided we don’t kill ourselves off or get whacked by an asteroid) by the next. A Watson-like system is being rolled out by IBM to assist in medical diagnosis. Google search, Google Voice, and Siri will continue to improve. New research into machine learning, user interfaces and the human brain is being brought from the lab into practice. It has been a long and bumpy road, at least as technological progress goes, since that first Dartmouth conference on artificial intelligence in 1956, after which Marvin Minsky boldly predicted that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved." Other bold predictions followed every few years, then every decade, until, with little actual success to show, the predictions stopped. In some ways that was when the work really began. And that work, as often happens, emerged not from the expected avenues but from the back alleys and offshoots of other research.
            Artificial intelligence is defined as the branch of computer science dealing with the simulation of intelligent behavior in computers. John McCarthy, one of the Dartmouth conference organizers and the man who coined the term, defined it as "the science and engineering of making intelligent machines." The founding idea was that the central feature of humanity – intelligence – could be analyzed, described and simulated by a machine. The core issues in accomplishing this have to do with perception, communication, analysis of sensory input, reasoning, learning, planning and responding to real-world events. The ability to perform these functions in a general manner (as a human does) is known as Strong AI, though much of the work and research is done on subsets of that larger goal. There are a number of associated areas as well, such as neuron simulation, learning theory and knowledge representation.
            The key, of course, is understanding intelligence – what it is, what it does, and perhaps even why it does what it does. But it is a bit like art or pornography: “I may not be able to define it for you, but I can certainly tell you what it is when I see it.” There are many, often disparate, definitions of intelligence, but what seems to be at the core is problem solving. It involves identifying a problem or obstacle, seeking or creating a solution, applying that solution and then evaluating the result. The evaluation provides a feedback process that informs future decisions.
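To make that loop concrete, here is a minimal sketch of the identify–solve–apply–evaluate cycle – my own toy illustration (a simple trial-and-error search in Python), not any particular AI system:

```python
# Toy "intelligent" loop: propose a solution, apply it, evaluate the result,
# and feed the evaluation back into the next decision.
# The "problem" here is just finding the x that minimizes f(x).

def f(x):
    return (x - 3) ** 2          # the obstacle: make this as small as possible

def solve(start=0.0, step=0.5, iterations=50):
    x, best = start, f(start)
    for _ in range(iterations):
        for candidate in (x + step, x - step):   # seek/create candidate solutions
            score = f(candidate)                 # apply and evaluate each one
            if score < best:                     # feedback informs the next choice
                x, best = candidate, score
    return x, best

print(solve())   # converges toward x = 3, where f(x) = 0
```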
            The artificial intelligence field got its start at the Dartmouth conference with four key figures – Allen Newell, John McCarthy, Marvin Minsky, and Herbert Simon – all of whom were computer scientists, with the exception of Simon, who was more of a psychologist/sociologist as well as being knowledgeable in other disciplines. All of them were in fact somewhat cross-disciplinary. This idea of building a machine capable of human intelligence came not long after the advent of the first practical computers. Given the broad capabilities exhibited by computer programming as a result of the von Neumann architecture, built on Alan Turing’s mathematical concepts, it seemed quite possible to program a computer to emulate human intelligence and decision making. But oh, what a tangled web was to be woven from this.
            Throughout history there have been numerous attempts, desires, stories and examples of building or bringing inanimate objects to life: clockwork robotic devices, statues, puppets and trees that came to life (Pygmalion, Pinocchio), mechanical devices built to simulate, emulate, or recreate human behavior – sometimes with dwarves or children hidden inside to fool the crowd – and back beyond even that to the oracle at Delphi. Given all the literature, myths, stories and actual devices, there must be something very deep in the human psyche that longs to re-create itself. Perhaps it comes down to the genetic drive to reproduce, recreate and perpetuate ourselves – a kind of genetic imperative driving our attempts, our need, to explore artificial intelligence.
            Nevertheless, the work got seriously underway following the Dartmouth conference, and there was serious money behind it, funded primarily by the defense departments of the United States and Britain, as well as by Russia and other world powers of the mid-20th century.
            The perceived promise led DARPA to invest approximately $3 million a year from 1963 into the mid-1970s. Similar investments took place in Britain. DARPA, of course, was looking for potential military applications during this Cold War time of tension around the world. The promise and the culture of the time, though, led to a devastating situation. The funds flowed with little oversight and the field went in many directions that produced little applicable output. These were the early days of computers, and the algorithms and programs designed to emulate things like human logic and reasoning were complex and resource-hungry. They did not work well on the hardware available at the time. Either the programs had to be scaled back and limited in scope, or very long time-frames had to be allowed in order to get results. Neither came close to approaching the abilities of a human brain on any level. Sensing capabilities such as vision and audio, which were being worked on as a subset of the AI problem, required massive programming just to acquire and manipulate the data into a form that could be used by the AI components.
By the mid-’70s the faltering field was stripped of funding and mostly dropped. This is now known as the first AI Winter, and it would last almost a decade, until the early ’80s. Some work continued, but without the freely flowing funds it was much more focused – more a labor of love than freewheeling experimentation. During this time much criticism was leveled at the computer scientists by other academic departments. Philosophy, psychology, biology and mathematics all took shots, but by the same token they were all interested in a field they had been shut out of in that early phase, and as a result many research institutes began bringing together diverse cross-disciplinary groups to work on the research, as well as providing the means for them to work better together. As a result we get learning specialists helping to design computer learning applications. We get knowledge management experts helping to devise search and storage hardware and software. And we find neurological experts working with programmers to simulate neural networks.
This ‘background’ research led to the next step in AI – expert systems, which were all the rage in the ’80s. An expert system was intended to be a subject matter expert in a specific, limited domain. It incorporated a knowledge base and a means of searching, retrieving (and updating) information. This led to a boom in database research and development. Computers were rushing along, following Moore’s Law and roughly doubling in capability every two years, which allowed ever more complex and ever faster search algorithms, as well as faster database search and retrieval. The programming language of choice for these systems was Lisp, a symbolic-processing language thought to better model thought processes. There was certainly some success for these expert systems, but again the results failed to match the expectations, and once more the field floundered.
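The core architecture was simple in outline: a knowledge base of facts, a set of if-then rules from a domain expert, and an inference engine that chains the rules together. Here is a rough sketch of that idea (in Python rather than Lisp, with made-up rules purely for illustration):

```python
# Minimal forward-chaining "expert system": facts plus if-then rules.
# The facts and rules here are hypothetical, for illustration only.

facts = {"fever", "cough"}

rules = [
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def infer(facts, rules):
    """Fire every rule whose conditions are all known, until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer(facts, rules))   # {'fever', 'cough', 'possible_flu'}
```

Real expert systems of the era added confidence factors, explanation facilities and far larger rule bases, but the skeleton is the same.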
In the meantime Japan initiated the Fifth Generation Computer project, which was intended to create computers and programs that could communicate in natural language, do visual processing and recognition, and emulate human reasoning. They dropped Lisp and chose a newer language, Prolog, as the core programming language – perhaps to leave the old ways behind and start anew. Other countries responded in kind to this ‘threat’ of computer dominance. During this time much work began to focus on neural networks and emulating the brain in hopes of breaking free of the step-by-step von Neumann style of programming – emulating the workings of individual brain neurons and connecting them in the manner of a biological brain, in both hardware and software (work that continues to this day). But again the lack of substantial, applicable results for business or military use brought on another ice age. The second AI Winter lasted from the late ’80s to the mid-’90s.
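At the lowest level, the unit being emulated can be written in a few lines. This is only the textbook abstraction of a neuron – weighted inputs, a threshold, a binary output – not a faithful model of the biology, and the weights below are hand-picked for illustration rather than learned:

```python
# A single artificial neuron: a weighted sum of inputs passed through a threshold.

def neuron(inputs, weights, bias):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0        # fire (1) or stay silent (0)

# Wired, by choice of weights, to behave like a logical AND of its two inputs.
weights, bias = [1.0, 1.0], -1.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], weights, bias))
```

Connecting many such units, and letting a learning rule adjust the weights, is what turns this into a neural network.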
By this time the field of robotics was rising, particularly due to the use of robots on assembly lines such as car manufacturing and electronics assembly plants. There was money to be had for robotics research, and a new slant on the AI field emerged. Giving these factory robots the means to handle ambiguity, to recognize defective parts, and to align and assemble parts properly – without human supervision or extremely precise programming and logistics – provided a new venue for AI. It wasn’t just emulating human intelligence and reasoning, but performing the tasks a human would do in a real-world assembly factory.
A separate but similar revolution was taking place in space exploration. Our robotic probes to Mars, Venus, Saturn and the outer planets were being designed with increasingly autonomous and error-correcting capabilities. These robots – rovers and probes of various styles – had to operate autonomously in much more demanding and dangerous situations than the factory floor. NASA and the military funded some of the best minds, universities and corporations to build these mechanical emissaries to the cosmos.
There was a completely different revolution taking place during these years as well. Since the mid-to-late ’80s, business demands for data storage and retrieval have exploded like a sun going nova. This fueled a great deal of database research and even special-purpose hardware such as the Teradata and Britton Lee database machines. The advent of the internet brought search engines to the fore, and the star that emerged, of course, was Google – now a household word and a verb synonymous with internet search. The massive data problem is far from solved; it continues to grow. Everything has gone digital. Businesses store all their corporate data digitally; our space telescopes produce massive amounts of data, as do research projects such as the Human Genome Project, other DNA and biological analysis research, and the recently announced Human Brain Mapping initiative. This is known today as the Big Data problem, and significant amounts of cash from government and private industry are pouring into managing it. The results are applicable to AI research as well, because one of the obstacles is providing and managing the amazing amount of information storage required to emulate a human brain.
A human brain has about the same number of neurons as there are stars in the Milky Way galaxy – roughly 100 billion. And each of these neurons may be connected to thousands of other neurons. This creates an amazingly complex multi-processing system that is not only difficult to emulate but requires computing capabilities beyond present-day systems. We may see it in the next half-century though.
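A rough back-of-the-envelope estimate – round numbers only, and assuming just a single stored value per connection – shows why the storage alone is daunting:

```python
# Order-of-magnitude estimate only; the real numbers are uncertain and larger.
neurons = 1e11                  # ~100 billion neurons
connections_per_neuron = 1e3    # thousands of synapses each (a low, round figure)
bytes_per_synapse = 4           # assume one 4-byte weight per connection

synapses = neurons * connections_per_neuron
print(f"synapses: {synapses:.0e}")                                  # ~1e14
print(f"storage:  {synapses * bytes_per_synapse / 1e12:.0f} TB")    # ~400 TB
```

And that is only the static wiring diagram, before any of the dynamics are simulated.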
All of these areas of research and application are at the forefront of today’s computer, information and cognitive science. Google is increasingly capable of parsing and analyzing natural language inputs and providing relevant results in extremely short time-frames. We have cell phones capable of processing speech input and providing similar search results, or taking actions based on the spoken words. Our robots are exploring Mars. Voyager 1, launched more than 35 years ago, is still functional and at the edge of interstellar space; its radio messages now take about 17 hours to reach us. The autonomous land vehicle trials by DARPA and Google continue, and Google’s vehicles have already been approved for testing on public roads in several states.
It seems we are now approaching real AI – capabilities on a par with humans – from several oblique angles, following the failures of the direct methods of programmed rational decision making, expert systems, and embedded logic. Real AI is coming not from the research labs but from the factory floor, from our autonomous space probes and vehicles, and from our information management needs. We continue to attempt to emulate the physical structure and workings of the human brain, but some of our best results are in our pockets – our cell phones, with voice-actuated access to the world’s knowledge at the tip of our tongues.





About the Author

Kenny A. Chaffin writes poetry, fiction and nonfiction and has published poems and fiction in Vision Magazine, The Bay Review, Caney River Reader, WritersHood, Star*Line, MiPo, Melange and Ad Astra and has published nonfiction in The Writer, The Electron, Writers Journal and Today’s Family. He grew up in southern Oklahoma and now lives in Denver, CO where he works hard to make enough of a living to support two cats, numerous wild birds and a bevy of squirrels. His poetry collections No Longer Dressed in Black, The Poet of Utah Park, The Joy of Science, A Fleeting Existence, a collection of science essays How do we Know, and a memoir of growing up on an Oklahoma farm - Growing Up Stories are all available at Amazon.com: http://www.amazon.com/-/e/B007S3SMY8. He may be contacted through his website at http://www.kacweb.com 


Sunday, October 13, 2013

The Elephants in the Room - Problems with Physics


 

The Elephants in the Room

by

Kenny A. Chaffin

All Rights Reserved © 2013 Kenny A. Chaffin





Maybe I’m getting crabby or frustrated or impatient, but it seems that physics has forgotten its basics. Let’s start with gravity. We become aware of gravity even before we are born, and even more when we begin to make our way in the world – rolling, crawling and walking. We are constantly aware of the ‘pull’ of gravity. The rational, scientific examination of gravity began with Galileo (though certainly many had thought about it before). He was the first we know of to conduct and record experiments to determine the effect of gravity; he supposedly tested it by dropping balls from the Leaning Tower of Pisa, though the majority of his experiments were done with balls rolling down inclined planes. He was able to determine that the acceleration due to gravity is constant.
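What Galileo found can be written compactly in modern notation (his own analysis was geometric; the symbols are the later textbook form): a falling or rolling body gains speed at a constant rate, so the distance it covers grows with the square of the elapsed time,

$$v = g\,t, \qquad d = \tfrac{1}{2}\,g\,t^{2}, \qquad g \approx 9.8\ \mathrm{m/s^2},$$

with the same g for a cannonball or a pebble.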
Now let’s jump to Newton. He took our knowledge a step further, actually determining the pull of gravity mathematically and showing that its action was the same on Earth as in space by describing how planets and stars interact. Still, he was at a loss as to what the force actually was and how it worked.
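The relationship he arrived at is, in modern form, the inverse-square law of universal gravitation,

$$F = G\,\frac{m_1 m_2}{r^{2}},$$

where m1 and m2 are the two masses, r is the distance between their centers, and G is the gravitational constant (about 6.67 × 10⁻¹¹ N·m²/kg², a value not measured until long after Newton). The same formula governs the falling apple and the orbiting Moon, yet it says nothing about what gravity actually is.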
Einstein took another giant leap by equating gravity and acceleration and ultimately describing gravity not as a force but as a curvature of space caused by mass. This theory predicted that light would be ‘bent’ when passing near a massive object. We have actually seen and measured this, as well as observing gravitational lensing, wherein massive galaxies bend the light of more distant objects around themselves.
Quantum mechanics has another approach. According to the Standard Model there should be a particle – a graviton – that mediates the gravitational force (note that gravity is a force here, not curved space), which appears to directly contradict Einstein’s General Relativity and its description of curved space.
String theory takes a slightly different angle, attempting to move beyond ordinary quantum mechanics by adding extra dimensions to space in the hope of unifying relativity and quantum mechanics; a related effort, loop quantum gravity, instead treats space itself as quantized. String theory has become extremely mathematically intensive and has yet to meet the standards of prediction and testability required by science.
Now maybe this gravity issue is just an ‘unanswered’ question of physics. Certainly that is what it seems to be, given the contradictory explanations offered by relativity and quantum mechanics. But hold this in mind for a bit.

In Special Relativity, Einstein ‘set’ the speed of light as constant and invariant based on what physicists had observed – primarily the results of the Michelson-Morley experiment. He pushed this idea further than anyone and developed his Special Theory of Relativity (prior to the General Theory), with its unique predictions about light and its behavior and, ultimately, about time itself changing depending on the observer’s position and motion – all because the speed of light must be invariant regardless of the motion of the observer. This leads to some very non-intuitive consequences, including the ability (in a manner of speaking) to travel into the future. It all appears to hold up, based on tests conducted with clocks of various types moving at various speeds. The question for me, though, is why the speed of light is constant, and why, if photons are massless, they have a finite speed. That is the elephant in the room, as is the curvature of space.
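The ‘time changing’ comes down to a single stretch factor. For a clock moving at speed v relative to an observer, the time interval the observer measures is

$$\Delta t = \frac{\Delta t_0}{\sqrt{1 - v^{2}/c^{2}}},$$

where Δt0 is the interval the moving clock itself records and c is the speed of light; the effect is negligible at everyday speeds and grows without bound as v approaches c.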
Now maybe, unlike Marvin the Android, my tiny brain just can’t comprehend these concepts, but it seems something is missing. It seems that perhaps we need to slow down, back up and rethink. That is a bit of what the bad boy of physics, Lee Smolin, is attempting to do. He spent years researching string theory (and, later, loop quantum gravity) before declaring string theory the wrong approach. This is detailed in his book “The Trouble with Physics” and continues in his latest release, “Time Reborn.” In it he revisits the fundamentals of physics, including light, space and time. He has taken the radical approach of declaring time the fundamental component of reality. In essence he is setting time as real and invariant, similar to what Einstein did with the speed of light. At one point in the book he states that this approach is really just another way of looking at relativity.
After reading about Special and General Relativity for decades myself (though without the mathematical ability to examine them in detail), I’ve often wondered whether one could simply choose some other component of reality as we know it – the speed of light, time, or space – set it as invariant, and work out the math required to develop an alternative view of reality, just as Einstein did. Smolin’s book and his view of time are written for the popular reader, and he admits he has not followed through with the mathematics but is putting the idea forward as a possibility, in the hope that someone will take it and run with it.
Along these same lines now come Dark Matter and Dark Energy, proposed to explain a couple of anomalies we observe in the cosmos. Dark Matter is the proposed solution to the aberrant motion of galaxies and stars, which act as though there were more mass in the universe than is evident from the stars and reflected light we see. It certainly seems possible there is ‘unseen’ matter causing this gravitational effect. Another possibility is that our observations are somehow affected by space or time or other laws of the universe we simply don’t understand yet.
This brings us to Dark Energy, the proposed solution to the recently discovered accelerating expansion of the universe. We’ve known (or assumed we have) that the universe is expanding since Hubble measured and proposed it in 1929. We can measure the red-shift of distant galaxies with spectrographic instruments, and we have determined not only that they are all moving away from us (implying expansion) but that the expansion is accelerating – first seen in observations of distant supernovae in the late 1990s and since supported by cosmic microwave background measurements. We don’t know why this is happening, yet Dark Energy has been anointed as the reason: a kind of anti-gravity ‘pushing’ space apart while at the same time being undetectable, unmeasurable, and invisible except through the aforementioned acceleration of the universe’s expansion.
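Hubble’s original relation is simple – a galaxy’s recession velocity is proportional to its distance,

$$v = H_0\, d,$$

with H0 (the Hubble constant) measured today at roughly 70 km/s per megaparsec. The surprise of the late 1990s was that the expansion this relation describes is speeding up rather than slowing down, and Dark Energy is the placeholder name for whatever is driving it.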
I don’t know about you, gentle reader, but this is all a bit unsettling for me. There are many unknowns here, and of course, as Feynman said, that’s okay – it is much better for science to say “I don’t know” than to create unsupportable, unsubstantiated solutions. I ‘believe’ in science; I trust it to eventually find the answers to the reality of the universe around us. But at times I wonder if perhaps we haven’t gotten off the path, whether we are not seeing the elephants in the room, and whether we should not stop, regroup, and reexamine some of our basic assumptions about the world around us.

   


