Update 2016-12-08: Comment by Alexander and my (long) response
John Naughton from The Guardian wrote an article with his opinion on Machine Learning. It's not that long and worth reading. Please do so.
To my understanding, he argues that, due to biased input data, machine learning does not produce correct results. Therefore, we should not be worried when Facebook, Amazon, Apple, Google, and others use Machine Learning techniques.
My conclusion is the absolute opposite.
Machine Learning is often used as a synonym for Artificial Intelligence (AI). They are not the same. Simplified, Machine Learning is one branch of Artificial Intelligence. But let's not dive into semantics; for this discussion, the differences are not that important.
Basically, the quality of the output of an algorithm depends on the input data (including training data), the algorithm, and the configuration (parameters) of the algorithm.
If the input data is biased, as mentioned in the Guardian article, the output is arbitrary. This is true.
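To make this concrete, here is a deliberately silly toy sketch (not a real ML system, and the labels are invented for illustration): a "model" that simply learns the most frequent label in its training data. Feed it skewed training data, and its output is determined entirely by the bias, not by the person being judged.

```python
from collections import Counter

# Toy "model": always predicts the most frequent label in its training data.
def train_majority_classifier(labels):
    most_common_label, _ = Counter(labels).most_common(1)[0]
    return lambda _example: most_common_label

# Biased training data: 90% of the samples carry the label "low_risk".
biased_training_labels = ["low_risk"] * 90 + ["high_risk"] * 10
predict = train_majority_classifier(biased_training_labels)

# The model now labels *everyone* "low_risk", regardless of the input.
print(predict({"age": 30}))  # low_risk
print(predict({"age": 75}))  # low_risk
```

Real models are far more sophisticated, but the principle scales: the bias baked into the training data shapes every prediction.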
When it comes to interpreting human behavior or any human-related data, algorithms will never be as «good» as a human. And not even all humans are qualified to interpret human-related data.
For example, you are not qualified to interpret human-related data from a totally different cultural background. A typical European is irritated by totally «normal» human behavior in Japan. Even the typical American is irritated by European opinions, e.g., on public health care, although we share most of our cultural background via media. Everybody is able to come up with a very long list of examples here.
And now we outsource this hard task to algorithms, written by humans who are far from infallible. You don't need to be an expert to see that this is not going to lead to perfect results.
Therefore, in my opinion, non-trivial human-related data should not be interpreted or judged by anything other than a qualified human.
In contrast to John Naughton, I am not reassured by the fact that cloud companies are using this biased input to generate arbitrary output.
Quite the opposite: I am absolutely horrified!
I am horrified because conclusions derived from my data are being used despite the fact that they are arbitrary and more or less plain wrong.
Cloud companies use this output data to provide a «personalized user experience». With wrong assumptions, product recommendations get weird, contact recommendations on social networks get funny, and people recognized in photographs are mixed up.
However, this is not the big problem here.
Way more worrisome: cloud companies are selling false conclusions to other companies, which take them as input data for their own algorithms.
We now have arbitrarily wrong input data which is fed into a different algorithm - the algorithm of the company that bought the data - which derives an even worse output.
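A back-of-the-envelope sketch of why chaining makes things worse (the 80% figures are invented for illustration, and the independence assumption is a simplification): if the first company's conclusion is right only 80% of the time, and the buyer's algorithm is itself right 80% of the time even on correct input, the chained result is right far less often than either step alone.

```python
# Hypothetical accuracies, chosen only to illustrate the compounding effect.
p_first_correct = 0.80   # seller's conclusion is correct
p_second_correct = 0.80  # buyer's algorithm is correct, given correct input

# Assuming the errors are independent, both steps must succeed
# for the final conclusion about you to be right.
p_chain_correct = p_first_correct * p_second_correct
print(f"{p_chain_correct:.0%}")  # 64%
```

Every additional company that resells and reprocesses the data multiplies in another factor, so the chain only ever gets worse.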
This is common practice. This is not some conspiracy theory. This totally affects you.
Insurance companies are buying data from cloud companies to determine whether or not you can get an insurance policy and, if so, how much you have to pay. Good luck when you've got the «wrong» connections to high-risk people on some social network.
Banks are buying, generating, and using data on a grand scale to decide whether you are going to get a loan. And the interest rate depends heavily on this data.
Companies are doing background checks on job candidates.
More and more types of businesses rely on such data. Big data is an enormous hype that will not go away for decades. This issue is already out of control.
You are not even going to notice when you have been categorized wrongly. You have no way to verify, falsify, or correct data about you. You are no longer master of the digital image that companies or governments maintain of you.
Oh boy, what a huge problem we've got here.
Learn how giving away data to the cloud affects you indirectly. Learn to be stricter about giving away data. Learn to accept certain inconveniences in order to keep your privacy. Learn from other people. Ask for help. Share knowledge.
Follow my blog. I write about privacy and how I try to keep my digital dignity. ;-)
Alexander sent me his comments via email. I'm glad to reply to his thoughts. However, it turned out to be a long reply, and I also added some additional notes on dangerous developments in our society in general. Therefore, I replied with a separate blog article. You might want to read it as well.