Paul Peloquin, Wichita SFTDEV, Reviews the New Ethics of Machine Learning


(Newswire.net — October 30, 2019) — For the past few years, the general public has been blasted with commercials, news, and blogs about the marvels of AI. In many instances, these marvels are just that: diseases are being detected earlier, archeologists are making new finds, and some people with special needs are finding new independence. And developers are finding that many of the tech giants are providing new ways for them to take advantage of this emerging technology.

One such developer poised to take advantage of the new tech surrounding machine learning is Paul Peloquin of Wichita, Kansas. While he is both fascinated by and excited about the potential within the industry, his experience as a philosophically minded programmer counsels caution. Below, Mr. Peloquin discusses the new ethics of machine learning with a peek under the hood.

With the latest releases of macOS Catalina and iOS 13, developers have been given new machine learning tools in the Speech, Vision, and Natural Language frameworks. And for those not finding what they need within those frameworks, Apple has expanded developers' ability to build their own custom Core ML models.
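To give a sense of how approachable these frameworks are, here is a minimal sketch using the Natural Language framework's new sentiment scoring, one of the additions in this release. The sample text is hypothetical, and this is an illustration rather than production code:

```swift
import NaturalLanguage

// Sentiment scoring is new in the Natural Language framework
// as of iOS 13 and macOS Catalina.
let text = "The new machine learning tools look genuinely promising."

let tagger = NLTagger(tagSchemes: [.sentimentScore])
tagger.string = text

// The tag's raw value is a score from -1.0 (strongly negative)
// to 1.0 (strongly positive).
let (sentiment, _) = tagger.tag(at: text.startIndex,
                                unit: .paragraph,
                                scheme: .sentimentScore)

if let score = sentiment?.rawValue {
    print("Sentiment score: \(score)")
}
```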

Nevertheless, before a developer uses these new technologies, the ethical ramifications of the particular use case should be examined. Just because one can do something, it does not necessarily follow that one should.

Any data scientist specializing in machine learning will tell you that the usefulness of a particular machine learning model in solving a specific problem is directly proportional to the quality of the data used to train it. Oftentimes, large amounts of useful data are hard to find. Another problem involves training on large amounts of data: the more data, the longer it takes to train a machine learning model. Because of this, one early yet proven trick that addresses both areas is called transfer learning.

One way to understand transfer learning is to look at it through human experience. Say a person is presented with a physical puzzle of 1,000 pieces, and it is the first time this person has done such an activity. With experimentation, this person may learn that it is easiest to find the edge pieces first, then to group pieces of similar color and pattern together. Once he has the puzzle frame constructed, and some of the smaller groups of pieces fitted together, the rest of the puzzle starts to come together.

Now, this person is later presented with a different activity: building a flagstone patio. The person is given the area where the patio is to go and a pile of flagstones to lay down to fill it. For aesthetics, the person is told to fit the stones together as best he can. While the patio problem is different, the lessons learned through piecing the puzzle together aid the person in solving it.

Similarly, lessons a neural network learns in solving one problem can be applied to different problems. One area where this is seen is image classification. Machine learning programmers will "peel off" the last neural network layer of a robust image classification model, then train a new model on top of the features the original network already learned, such as how to recognize corners and edges.
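Apple's Create ML tooling on macOS makes this approach concrete. The sketch below (with hypothetical file paths) trains an image classifier on top of Apple's pre-trained ScenePrint feature extractor, so only the new final layer is learned from the developer's own images:

```swift
import CreateML
import Foundation

// Hypothetical training directory: one subfolder of images per
// label (e.g. "dog/", "cat/").
let trainingDir = URL(fileURLWithPath: "/Users/demo/TrainingImages")

// Reuse Apple's pre-trained ScenePrint feature extractor and train
// only a new final classification layer on top of it; this is
// transfer learning.
let parameters = MLImageClassifier.ModelParameters(
    featureExtractor: .scenePrint(revision: 1)
)

let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir),
    parameters: parameters
)

// Export the result as a Core ML model for use in an app.
try classifier.write(to: URL(fileURLWithPath: "/Users/demo/Classifier.mlmodel"))
```

Because most of the network's knowledge is inherited rather than retrained, this works with far less data and time than training from scratch.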

Where this becomes interesting is when ethics are considered. When a data scientist uses transfer learning, does he or she know what biases the transferred learning brings with it? In terms of society, we have made it clear that racial profiling is not an ethical reason for a police officer to conduct a traffic stop; that bias is not appropriate. As we turn over more and more tasks to computers, any bias in a learned model will reflect the data used to train it. And if transfer learning is used, unknown biases can carry through unnoticed.

It will be incumbent on data scientists in such situations to look for such biases and, if found, to develop ways to overcome them.
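One simple starting point is to compare a model's outcomes across groups. The sketch below, using hypothetical labels and predictions, computes per-group accuracy; a large gap between groups is a signal worth investigating:

```swift
// A basic bias check: compare a model's accuracy across groups.
// The data here is hypothetical; in practice, the predictions
// would come from the model under audit.
struct Prediction {
    let group: String      // a demographic or other attribute
    let expected: String   // ground-truth label
    let predicted: String  // model output
}

func accuracyByGroup(_ predictions: [Prediction]) -> [String: Double] {
    let grouped = Dictionary(grouping: predictions, by: { $0.group })
    return grouped.mapValues { items in
        let correct = items.filter { $0.expected == $0.predicted }.count
        return Double(correct) / Double(items.count)
    }
}

let sample = [
    Prediction(group: "A", expected: "approve", predicted: "approve"),
    Prediction(group: "A", expected: "deny",    predicted: "deny"),
    Prediction(group: "B", expected: "approve", predicted: "deny"),
    Prediction(group: "B", expected: "approve", predicted: "approve"),
]

for (group, accuracy) in accuracyByGroup(sample).sorted(by: { $0.key < $1.key }) {
    print("Group \(group): \(accuracy * 100)% accurate")
}
```

Equal accuracy alone does not prove fairness, but a disparity like this is exactly the kind of inherited bias a data scientist should go looking for.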

About Paul Peloquin:

Paul Peloquin of Wichita, Kansas, has been developing his skills as a progressive programmer since 1983. After obtaining a doctorate from Oklahoma City University, Mr. Peloquin first applied his abstract software development skills to the legal industry, where he successfully innovated document management and scheduling programs. Since then, he has continued to revolutionize software for several Fortune 500 companies, including General Motors. Notably, his work to improve the user experience of wearable smart devices was a featured pick by Samsung editors on multiple occasions. He continues to be engaged in numerous technology consulting and development projects, including best data practices, network security, and technology intellectual property. Outside the office, Paul Peloquin enjoys employing thought experiments, instructing in the Swift programming language, MongoDB database software, and Samsung development strategies, and spending time with his family.