With a fair amount of shared history and strong opinions, conflicts are natural in an apartment society. This one was epic. It was about extending the rainwater harvesting capacity. Let me spare the details and focus on the real story.
Murthy uncle was furious. “What does THAT man think of himself?”
I was puzzled. He was talking about another resident, on the other side of the conflict.
“Do you know that he called me nutty uncle? Do you even know what is nutty? It means mad.”
I was quiet.
“Do you get it? An old mad man.”
I said, “That is outrageous! Did he really say that? When and where?”
“I do not know. I was told that it was written on WhatsApp. I do not use a smartphone at all.”
He was still upset. “So what! Now the whole colony thinks I am nutty.”
I was beginning to understand what had happened. That man had written something about Murthy uncle on the neighborhood WhatsApp group. That bloody Android auto-correct had changed “Murthy” to “Nutty”.
“Uncle, I think it is a misunderstanding. He might have typed it correctly, but it got auto-corrected.”
“What do you mean? Murthy was wrong and Nutty is correct!? It is not auto-correct. It is auto-wrong or something.”
I apologized and pulled out my own smartphone.
“Let me demonstrate it”.
When I demonstrated on my phone, it didn’t auto-correct to ‘Nutty’. It auto-corrected to ‘Murphy’.
“See, you were wrong. The actual auto-correction showed a wise man, Murphy. So changing it to nutty was deliberate mischief!”
Now I was lost. While you read vague predictions about how AI could wreck society, here is a real-life, concrete example. That of the apartment society.
It is so hard to understand and explain AI, the new buzz of Artificial Intelligence and Machine Learning.
Machine Learning is a complex area, made worse by the hype: it has to bear the brunt of inflated expectations.
You have an intelligent algorithm, or a set of them. There are ‘models’, ‘features’, ‘data sets’, ‘linear regression’, ‘logistic regression’ and ‘neural networks’. After all this, the behavior of the system depends on the ‘training set’, the ‘test set’ and ‘online learning’. There is very little predictability and almost no ‘reproducibility’: the same algorithm can produce different outputs depending on the training set and even the sequence of the training data.
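To make the reproducibility point concrete, here is a minimal, made-up sketch (not any particular library): one pass of stochastic gradient descent fitting y = w·x. The data points and learning rate are invented for illustration; the only thing that changes between the two runs is the order of the training examples, and yet the learned weight differs.

```python
def fit_one_pass(samples, lr=0.1):
    """Fit y = w*x with one epoch of stochastic gradient descent.

    Returns the final weight after seeing each sample once,
    in the order given.
    """
    w = 0.0
    for x, y in samples:
        error = w * x - y      # prediction error on this sample
        w -= lr * error * x    # gradient step for squared loss
    return w

# Slightly noisy data, roughly y = 2x (values are illustrative only)
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

w_forward = fit_one_pass(data)
w_reversed = fit_one_pass(list(reversed(data)))

# Same data, same algorithm, different final weights,
# purely because the sequence of training examples changed.
print(w_forward, w_reversed)
```

Real training pipelines shuffle data, run many epochs and average away much of this effect, but the underlying sensitivity to data order is exactly what the toy run exposes.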
Coming to NLP (Natural Language Processing), again there are specific algorithms. As the Technology Review states ‘here’, ‘word distance’ can tell you that “Harvard” is very close to “Garbage”. Now you can imagine the results in countless other scenarios and contexts.
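The mechanism behind ‘word distance’ is easy to sketch. Words become vectors of numbers, and “closeness” is cosine similarity between vectors. The three-dimensional vectors below are deliberately constructed toy values (real systems learn hundreds of dimensions from text, and this does not reproduce the actual result the article mentions); they only show how an unrelated word can land nearer than a related one.

```python
import math

# Hypothetical toy "embeddings", invented for illustration only
embeddings = {
    "harvard": [0.90, 0.10, 0.30],
    "college": [0.80, 0.20, 0.40],
    "garbage": [0.85, 0.15, 0.35],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# In this toy vector space, "garbage" sits closer to "harvard"
# than "college" does, even though that makes no semantic sense.
print(cosine(embeddings["harvard"], embeddings["garbage"]))
print(cosine(embeddings["harvard"], embeddings["college"]))
```

The model has no notion of meaning, only of geometry: whatever the training text happened to place nearby in the vector space is, by definition, “close”.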
Hope you are able to relate to the problem. This will continue to be a generic problem for all of AI and “situation awareness”, including self-driving cars.
AI is about intelligence and logic. Much less about ‘wisdom’. How to add wisdom to Artificial Intelligence? You would need to model it as features and make the system learn it as additional ‘intelligence’. There is no other way.
As long as this stays true, humans will continue to be relevant in the quest for replacing everything human. So my survival guide: try to hold on to things and skills that cannot be modeled into AI. What AI will keep gaining is intelligence. But wisdom has many more dimensions (features, in AI models) than computing can practically reach. Could we somehow make sure that we stay wise, not just intelligent?
Some food for thought on AI and… how to stay a relevant human 10 years down the line.