Tuesday, October 29, 2024

#Artificial_Intelligence - A Guide for Thinking Humans - #Melanie_Mitchell - Review

True to its title, the book is an excellent guide for all thinking humans. The author begins with a sketch of the history of AI research from its birth in the 1950s, and outlines its key figures and significant ideological branches. These are broadly classified as symbolic (conscious reasoning) and sub-symbolic (sub-conscious learning), the latter being biologically inspired structures that learn patterns and rules from lots of data.

AI is a technological endeavour, and like other big sci-fi dreams - deep space travel, cheap clean energy, trans-humanism - there is an enormous gap between our current capability and our vividly imagined end point (which most register as 'fear'). It is a gap that is easy to dismiss while breathlessly fretting over superintelligence and singularities, yet that gap is filled with some extremely difficult challenges that we currently have little idea how to approach, let alone solve. That is probably why data sets and data analysis are at the fore in all branches of science and technology.

The author provides various examples of the data sets used as a training basis for machines (and the book is full of illustrations as well). We have all heard about 'Big Data', but never realized it would end up yielding such extended benefits. It is perhaps for this reason that data in general, and scientific data in particular, is now accepted for publication by most publishers. Earlier, the same data would have been rejected for lacking novelty or creativity. If you have repeat data, the caveat for its publication is a geo-tag (as the project would be new for that place, and the results would credibly vary).

Here is a brief summary of the various sections of the book - or rather, the guide:

# The first and simplest starting point is 'definition' - "Define your terms or we shall never understand one another". IQ, which is measured on a single scale, can thus be differentiated into various dimensions - emotional, verbal, spatial, logical, social, etc.

That GPS was actually short for 'General Problem Solver', rather than what we assume it means today, was a surprise to me.

The author explains the beginnings of logical coding with a simple old-school puzzle: men and a man-eater trying to cross a river with one boat, where only two may travel per trip, and a man must never be left alone at the mercy of the man-eater. The coding then works through the various combinations using symbols, ifs, buts…
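To give a flavour of what such coding looks like, here is a minimal Python sketch of the state-space search involved. The head-counts (three men, three man-eaters) and the "never outnumbered" rule are my own generalization of the puzzle, not the book's exact setup:

```python
from collections import deque

# State: (men_left, eaters_left, boat_on_left); everyone starts on the left.
START, GOAL = (3, 3, True), (0, 0, False)
TRIPS = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # (men, man-eaters) per boat trip

def safe(m, e):
    # Men must never be outnumbered by man-eaters on either bank (if present).
    return (m == 0 or m >= e) and (3 - m == 0 or 3 - m >= 3 - e)

def solve():
    queue, seen = deque([(START, [])]), {START}
    while queue:
        (m, e, boat), path = queue.popleft()
        if (m, e, boat) == GOAL:
            return path
        for dm, de in TRIPS:
            sign = -1 if boat else 1               # boat carries people across
            nm, ne = m + sign * dm, e + sign * de
            nxt = (nm, ne, not boat)
            if 0 <= nm <= 3 and 0 <= ne <= 3 and safe(nm, ne) and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(dm, de)]))

print(solve())  # the sequence of (men, man-eaters) ferried on each trip
```

The "ifs and buts" become the safe() check, and the search simply tries every legal combination of passengers until it reaches the goal.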

Mitchell wonders why we trust a review from a friend and give his opinion more weight than others'. Her point is that a machine might not be able to analyze the 'trust' factor despite having all the data! There is much more discussion on how machines perceive figures, showing that data alone (of whatever sort you can imagine) cannot help them.

Encountering new ideas like these creates a lot of optimism about breakthroughs!

# Neural networks take up the second chapter (entirely), with a lot of discussion of 'layers', viewed against the human brain from which most of the inspiration for these analyses is drawn. There is always an input layer and an output layer, which are not hidden; what happens inside the hidden layers is based on the data and subsequent logic. AI springs and AI winters intervene in between.
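A minimal sketch of that input-to-hidden-to-output flow; the layer sizes and random weights below are arbitrary assumptions of mine, just to make the 'layers' concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer network: 4 inputs -> 3 hidden units -> 1 output.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)   # input -> hidden weights
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    hidden = sigmoid(x @ W1 + b1)   # the "hidden layer" the book refers to
    return sigmoid(hidden @ W2 + b2)

x = np.array([0.2, 0.7, 0.1, 0.9])  # an arbitrary input pattern
print(forward(x))                    # the network's output activation
```

Training, of course, means adjusting those weights from data; this sketch only shows the forward pass through the layers.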

Anthropomorphizing - the language of machines that 'think' - goes back to Turing's test.

In this connection, Ray Kurzweil's text-to-speech work is cited as the basis for his futuristic prognostications about machines. His books "The Age of Spiritual Machines" and "The Singularity Is Near" are suggested for further reading.

# The concept of exponential growth is illustrated with a story of a sage visiting a neighbouring rich king and offering to answer any question posed by the 'Darbar' (royal court). The sage extracts a promise from the king that, each time he answers correctly, he will be given grains that double from one chessboard square to the next. The king laughs at first and agrees, only to realize in dismay that by the time the second row of the chessboard is covered (16 squares), about 65,535 grains (2^16 - 1, roughly 2 kg) have accumulated. Here lies a difference between man and machine: while the king realizes the wisdom of the sage, the machine might not!
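The doubling is easy to verify in a few lines of Python (the ~0.03 g weight per grain of rice is my assumption):

```python
# Cumulative grains after n squares, doubling each square: 2**n - 1.
for squares in (8, 16, 32, 64):
    grains = 2**squares - 1
    kg = grains * 0.03 / 1000       # assuming ~0.03 g per grain of rice
    print(f"{squares} squares: {grains:,} grains (~{kg:,.2f} kg)")
```

Sixteen squares give about 2 kg; the full board of 64 squares gives hundreds of billions of tonnes - the whole point of the story.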

# Raw information from pictures is obtained through simple queries: who, what, when, where and why. This section has many examples that are simple as well. But when it comes to the transformation into 'wisdom', the marvellous nature of the brain has never been fully understood. How the brain turns visual information into an understanding of a scene is hard for a machine to replicate. For instance, a dog with a human is hard for machines to recognize if the picture is not clear. So data scientists built a network called a ConvNet (Convolutional Neural Network), in which the shades of a picture are pixelated, assigned numeric values, and evaluated.
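Here is a toy sketch of that "pixelate, assign values, evaluate" step. The image and the filter below are made up, and a real ConvNet learns its filter values from data rather than having them hand-written:

```python
import numpy as np

# A tiny grayscale "image": pixel shades as numbers, as the book describes.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [1, 1, 0, 0],
    [1, 1, 0, 0],
], dtype=float)

# A 2x2 edge-detecting filter; real ConvNets learn these values from data.
kernel = np.array([[1, -1],
                   [1, -1]], dtype=float)

def convolve2d(img, k):
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)  # slide and multiply
    return out

print(convolve2d(image, kernel))  # strong responses where shades change
```

Stacking many such learned filters, layer upon layer, is what lets a ConvNet pick out edges, then shapes, then whole objects.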

# PASCAL gets a tribute as one of the early programming languages, and Amazon's Mechanical Turk - a marketplace that still requires human intelligence to do the work - is mentioned alongside it.

# Facebook, with which we were happier when it started out as a social platform, actually had a hidden objective. It gathered all our data and registered a patent for classifying our photos by the emotions behind the expressions. Using similar logic, Twitter filtered pornographic images. The analytical part required a huge number of CPUs, and it was then that Nvidia's GPU stock price rose 1,000%, as ConvNet and ImageNet usage doubled.

# Facebook says 'thank you' because it was able to differentiate people using imaging techniques and to offer 'tags' for them. Likewise, Flickr used its pictures as training material for machines to recognize them. There is a description of the long-tail graph, with many good (if simple) examples, which might give an idea of how anyone can factor simple failure possibilities into avoiding catastrophes. It is basically a graph of the likelihood of things that may appear while performing a task (i.e., if you want to train a driverless car, you have to cover everything it might come across - traffic lights often, a lion on the street far less often); a small sketch follows below.
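A small sketch of what such a long-tail graph looks like; the driving events and the power-law fall-off are my own illustrative assumptions:

```python
import numpy as np

# Hypothetical driving events ranked from common to rare (my own examples).
events = ["traffic light", "pedestrian", "cyclist", "deer",
          "couch on road", "lion on street"]

# Long-tail sketch: likelihood falls off roughly as a power law with rank.
ranks = np.arange(1, len(events) + 1)
probs = ranks ** -2.0
probs /= probs.sum()

for event, p in zip(events, probs):
    print(f"{event:>15}: {p:.4f}")
# The rare tail events are individually unlikely, but together they are
# exactly where a driverless car's training data is thinnest.
```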

# Blurry backgrounds causing images to be labelled as animals, and cameras judging Asian faces to be blinking or producing racially denigrating labels, have been some of the machines' outputs, particularly with deep neural networks.

# The section on ethical AI deals a lot with the ethics a machine might not 'know'. If you ask a driverless car to take you home, it cannot determine whether it really knows what it needs to know, which could invite misuse of the car. Google's DeepMind has accordingly written a great deal on the beneficial ethics of face recognition (similar to FB asking to tag you).

# Asimov's fundamental rules of robotics get mentioned, and they are always exciting to read.

"A robot must not injure a human" and the further rules make pleasant reading. It is here that the author discusses the simple trolley problem. There is a picture of a woman with a trolley trying to cross a road, engrossed in her mobile as she crosses. If a driverless vehicle suddenly came across such a woman and tried to avoid hitting her, it would have to swerve in the opposite direction, which might kill more people than this single woman. So should the car go ahead and kill one instead of six?

# That Steve Jobs started his career at Atari, where he was assigned to work on the game Breakout, might provide solace to all the gamers struggling with a slow IT future. But Steve was a hard worker, and he probably sensed the future of machines, which may have helped him achieve his goals. The concepts of supervised learning vs. reinforcement learning are then discussed to temper the dangers projected onto machines (a small sketch of the difference follows below). Similarly, another stalwart gets his name mentioned: Arthur Samuel of IBM, who wrote the famous checkers-playing program and coined the term 'machine learning' (ML).
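To make the supervised-vs-reinforcement distinction concrete, here is a toy Q-learning sketch of my own (the corridor world, rewards and parameters are all invented). The agent is never shown labelled "correct" moves, as it would be in supervised learning; it discovers them from reward alone:

```python
import random

# Toy corridor: states 0..4; reaching state 4 gives reward 1, else 0.
N_STATES, ACTIONS = 5, (-1, +1)            # move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1      # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])  # Q-learning update
        s = s2

policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)  # learned policy: +1 (move right) in every state, as expected
```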

Probability and statistics are useful when the goal is prediction rather than performance. Deep Blue, IBM's chess-playing machine, made a good foray here: in chess, every position offers about 35 possible moves on average. Similarly, Monte Carlo - the family of simulation algorithms based on probability, including the evaluation of electron positions - was first used in designing the atom bomb (the Manhattan Project), and these techniques still do a great job (including in quantum mechanics).
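Monte Carlo in its simplest form - estimating pi by random sampling - is easy to show; this is the classic textbook toy, not anything from the Manhattan Project itself:

```python
import random

# Throw random darts at the unit square; the fraction landing inside the
# quarter-circle approximates its area, pi/4.
def estimate_pi(n_samples=1_000_000):
    inside = sum(
        1 for _ in range(n_samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples

print(estimate_pi())  # ~3.14, and it improves with more samples
```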

# Deriving information from data and reaching any result or conclusion often takes the machine time. If not for the processor, it would never beat humans - who have now started lagging behind machines, leaving the latter to do all the work. The various conclusions that can be drawn from a single story are discussed over many pages, with interesting results suggesting that AI cannot beat human intelligence. Here is the story: in a restaurant, a customer orders some food, which unfortunately gets charred by the cook; the waitress presents the food with an apology, but the man leaves the restaurant murmuring some words (machines cannot grasp this, while humans can guess his dismay). The waitress's last words were: "Why is he so bent?"

Now this story is fed to the machine through translators, and different languages yield different interpretations, which makes for interesting reading (Word2Vec, initially). But it is only a human being who understands clearly that the customer left without eating! (Gracias: neural network layers.) One language-interpreting machine even concluded that 'bent' referred to angular geometry!!!
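A toy sketch of the Word2Vec idea: meanings live in vectors and are compared by cosine similarity. The four-dimensional numbers below are invented; real Word2Vec learns hundreds of dimensions from billions of words, which is how 'bent' the mood-word and 'bent' the shape-word end up in different neighbourhoods:

```python
import numpy as np

# Made-up 4-d word vectors; direction in the space stands in for meaning.
vectors = {
    "angry":   np.array([0.9, 0.1, 0.0, 0.2]),
    "upset":   np.array([0.8, 0.2, 0.1, 0.3]),
    "curved":  np.array([0.0, 0.9, 0.8, 0.1]),
    "crooked": np.array([0.1, 0.8, 0.9, 0.0]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["angry"], vectors["upset"]))   # high: similar meanings
print(cosine(vectors["angry"], vectors["curved"]))  # low: different meanings
```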

A new rule of thumb probably displaces the 80-20 rule for learners with a 90-10 one: the first 90% of a project takes about 10% of the time, and the last 10% takes 90% of the time.

In the end, the author weighs the speculation around AI, since the expectations associated with it are very high. There is no exact conclusion about the future of AI; only incremental, even infinitesimal, changes on the technological front will come to the fore over time. People who know coding, algorithms and data science will have a tough time training machines to get closer to natural intelligence. And those without any data or algorithms will be left pondering in uncertainty.

