Machine learning and artificial intelligence affect nearly every aspect of our business and personal lives, today and for the foreseeable future. There is also a level of exasperation around the topic, because there is often no clear, shared understanding of what reasonable expectations for ML/AI are or how success can be measured.
Everyone needs a basic understanding of ML/AI. Software developers charting an organization's ML strategy must cut through the marketing diversions of software, hardware and services vendors. We all need to understand what ML and AI are and, just as importantly, what they are not and what they cannot deliver today. While ML/AI relies on complex algorithms, understanding and applying those algorithms is not difficult. We can take advantage of what exists in ML/AI today and plan for more exciting developments in the near future.
The state of ML
ML/AI uses mathematical algorithms to translate a set of input variables into concrete predictions. A trained machine learning model can make valid decisions based on input combinations it has never seen before. The self-driving car is a familiar example: it must avoid hitting pedestrians in any situation, no matter what they look like, what they are wearing, how fast they move, how tall they are, how loudly they talk, or whether they carry a bag, sit in a wheelchair or wear a hat. The car must also avoid pedestrians whether it is raining or snowing, however many lanes there are, whether or not there is a sidewalk, whether it is light or dark, and under any other situational parameters.
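The core idea can be sketched in a few lines: a model trained on known examples produces a concrete prediction for an input it has never seen. The tiny nearest-neighbor "pedestrian classifier" below is purely hypothetical, with invented features and data, and is vastly simpler than anything in a real vehicle.

```python
import math

# Hypothetical training examples: (height_m, speed_m_s) -> label,
# where 1 = pedestrian and 0 = not a pedestrian. All values invented.
training = [
    ((1.7, 1.4), 1),
    ((1.6, 1.2), 1),
    ((0.5, 8.0), 0),   # e.g. a fast, low object such as a dog or debris
    ((1.8, 15.0), 0),  # e.g. a cyclist
]

def predict(features, k=3):
    """Classify by majority vote of the k nearest training examples."""
    dists = sorted((math.dist(features, x), label) for x, label in training)
    votes = [label for _, label in dists[:k]]
    return round(sum(votes) / k)

# The model has never seen this exact input, yet still makes a decision.
print(predict((1.65, 1.3)))  # → 1 (classified as a pedestrian)
```

The algorithm choice here (k-nearest neighbors) is incidental; the point is only that the trained model generalizes from past examples to new inputs.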
But how do you train a machine learning model that requires such a wide range of capabilities and must not make any costly mistakes? This is the crux of the problem. Training a machine to drive a car took a multibillion-dollar investment, and it did not produce an AI model that transfers to other problems. In other words, teaching the ML model to drive was a long process of solving separate challenges, from pedestrian safety rules to deciding on a speed when a sign is not visible. Many of these situations have more than one dimension, and significant legal implications if handled incorrectly. For example, the self-driving car should brake for a dog in many cases, unless that action would endanger a human life. How finely does our machine learning model weigh its options? Will it accept a small fender bender to save the dog? But what if that accident involves a truck pulling a large camper in snowy conditions on a bridge? Should the autonomous vehicle use the sidewalk for an evasive maneuver if it is fully certain that no pedestrians are present?
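One common way to frame this kind of trade-off is as cost minimization over candidate maneuvers. The sketch below is purely hypothetical: the maneuver names, risk estimates and cost weights are invented for illustration, and a real system would involve far more dimensions as well as hard legal and safety constraints, not just weights.

```python
# Illustrative cost weights (relative units, not dollars). A human
# injury is weighted orders of magnitude above property damage.
HUMAN_INJURY = 1_000_000
ANIMAL_INJURY = 10_000
VEHICLE_DAMAGE = 1_000

# Hypothetical risk estimates for each candidate maneuver.
options = {
    "brake hard":       {"human": 0.00, "animal": 0.10, "damage": 0.30},
    "swerve to lane":   {"human": 0.05, "animal": 0.00, "damage": 0.50},
    "use the sidewalk": {"human": 0.20, "animal": 0.00, "damage": 0.10},
}

def expected_cost(risks):
    """Combine per-outcome risks into a single expected cost."""
    return (risks["human"] * HUMAN_INJURY
            + risks["animal"] * ANIMAL_INJURY
            + risks["damage"] * VEHICLE_DAMAGE)

best = min(options, key=lambda name: expected_cost(options[name]))
print(best)  # → brake hard
```

With these made-up numbers, any option carrying even a small human-injury risk is dominated by braking, which mirrors the intuition in the dog-versus-fender-bender example above.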
Tags: artificial intelligence, machine learning, ML/AI