
Q&A: The techniques powering AI are a Monkey’s Paw – be careful (Includes interview)

The human tendency is to ask “what” before “why” when solving a problem or creating a solution. With machine cognition, however, the “why” is more important than the “what”: it ensures that the reasoning and mission behind any effort are rooted in intention rather than pursued blindly.

According to Daniel Blackburn, a data scientist at Globant, keeping the “why” in mind is paramount to a project’s success: it guides the team to build with intention and constrains the unintended consequences that come from machines learning on their own. Blackburn explains how in our special interview.

Digital Journal: How advanced is AI becoming?

Daniel Blackburn: Machines have been “smart” in a narrow sense for a long time. Pocket calculators have been around since the 1970s and, to this day, are faster than us and perfectly accurate. In 1997, Deep Blue demonstrated that computers can beat us at our own games. However, in the last decade, advances in machine learning have made AI pervasive in everyday life.

Take the example of autonomous cars. Today, smart vehicles can maintain their lanes on highways and be summoned in parking lots. Cars aren’t yet driving themselves from pickup to destination, but I wouldn’t bet against it becoming available on the market in the next decade, given the advancements we’re seeing. The question now is: how might society change when a workforce of millions is no longer needed in our transportation system?

DJ: Is AI true ‘intelligence’?

Blackburn: There’s an underlying question: what is intelligence? Defining intelligence has become a moving goalpost, but at a high level, intelligence is defined in terms of what we consider an intelligent being and the range of things that being can do. This “intelligent being” doesn’t need to “understand” in a deep sense, but it does need to operate on that premise.

In that regard, machines are growing increasingly intelligent. This is borne out in every field of cognition; to name a few: game-playing agents (e.g. chess), computer vision, natural language understanding and generation, automatic speech recognition, and autonomous vehicles. Some will argue that the AI models built up to this point have intelligence only in a narrow sense: each model can do one thing well, while humans can do many things well. However, I believe that as AI models mature, we’ll see more software developers using them like Lego bricks, as components of a larger system. The system will have executive function, a brain, that decides which AI models to use when facing a particular problem. The brain may be a set of man-made rules, or it may be another AI model that is itself a narrow intelligence. In either case, the assembled product starts to solve a variety of problems, utilizing a variety of techniques. These “intelligent tools” will complement humans. In the workplace, they will allow us to focus our attention on new, challenging, unsolved problems.

DJ: What are the risks with AI on its current developmental trajectory?

Blackburn: One risk is people using AI to spread misinformation. Many techniques for image and text generation are publicly available today and need only be refined for nefarious purposes.

Deepfakes are in the public’s periphery right now. Such techniques will eventually enter the forefront of the public’s consciousness. I hope they enter our consciousness through a series of ethical demonstrations of their capabilities, such as Jordan Peele impersonating Obama. A frightening alternative would be their use to affect public opinion of an election, with broad awareness of the problem coming after the polls have closed.

Text generation is another avenue of attack on our “social order.” Bots can generate a huge variety of passable text around a theme. Such messages can be so diverse that at least some will pass as intelligible human-generated text. If these messages are submitted en masse to public forums, what use will the forums be for enabling humans to share ideas with one another? Machines have the opportunity to make fringe issues appear mainstream or imply the existence of a “silent majority” around an issue. As these bots grow more common, we may need to develop new tools for distinguishing an individual’s thought from an automatically generated message. As with video forgery, we will need to collectively learn to distinguish human-generated text from computer-generated text.

An additional risk is autonomous decision-making. Today, data-driven models are used extensively to connect the right person with the right advertisement, resulting in higher conversion rates of potential customers. It makes sense for advertisers to target their advertisements, but marketers need to consider the ethical implications of doing this to avoid bias.

Businesses with AI models need to consider the ethical implications of their use. Businesses should be transparent by explicitly saying which product elements are informed by machine-learned models. The models should be explainable, allowing the user to understand what features were used to make the decision. Sometimes when pressed on these issues, executives will make the point that data rights are being maintained as required by relevant laws. Product designers and developers should ask themselves whether the laws are sufficient for ethical product use. Oftentimes, the law falls short of respecting the data rights of customers. By mandating a higher standard, a business can quietly improve the lives of its customers.

Alter 3 is part of a new Artificial Life (Alife) research project. The goal is to explore the future of human communication.

DJ: What are the risks of AI having unintended consequences?

Blackburn: Suppose a data scientist wants to predict the market price of an apartment in Seattle. The data scientist builds a model that assumes the price of an apartment rental is a base rate plus a cost that’s proportional to the floor area. Written out, the pricing model looks like this:

Price = (Base price) + (Proportionality constant) × (Floor area in square meters)

The data scientist collects data on historical rental prices in Seattle and uses this to train the model. Through training, the model learns the “Base price” and “Proportionality constant” that minimize the error in prediction. However, training a model is not the end. The data scientist needs to understand the model’s biases and fix them. With this approach, biases may exist related to metro area, neighborhood characteristics (school zones, crime rates, etc.), cost of living, and properties of each individual apartment unit.
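
To make the setup concrete, here is a minimal sketch of fitting such a model with ordinary least squares. The rental figures below are hypothetical, invented purely for illustration; they are not from the interview.

```python
import numpy as np

# Hypothetical historical rentals: floor area (square meters) and monthly price (USD).
floor_area = np.array([40.0, 55.0, 70.0, 85.0, 100.0])
price = np.array([1500.0, 1800.0, 2150.0, 2450.0, 2800.0])

# Design matrix with a constant column, so the fit learns both terms of
# Price = (Base price) + (Proportionality constant) x (Floor area).
X = np.column_stack([np.ones_like(floor_area), floor_area])
coefficients, *_ = np.linalg.lstsq(X, price, rcond=None)
base_price, proportionality_constant = coefficients

print(f"Base price: {base_price:.0f}")
print(f"Proportionality constant: {proportionality_constant:.2f} per square meter")
print(f"Predicted price for a 65 m^2 unit: {base_price + proportionality_constant * 65:.0f}")
```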

One way to evaluate these biases is to break the apartments into subgroups and check whether the model performs the same in each group. For example, the data scientist may compare apartments in neighborhoods with above-average crime and those with below-average crime. It may be that apartments in areas with above-average crime have a lower price, while those in areas with below-average crime have a higher price.
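
As a rough illustration of that subgroup check, the sketch below continues the hypothetical example: the coefficients, prices, and the crime flag are all made up for illustration, and mean absolute error stands in for whatever error metric the team actually uses.

```python
import numpy as np

# Coefficients assumed to come from an earlier fit (hypothetical values).
base_price, proportionality_constant = 900.0, 19.0

# Hypothetical held-out apartments: floor area, observed price, and a flag for
# whether the unit sits in a neighborhood with above-average crime.
floor_area = np.array([45.0, 60.0, 75.0, 50.0, 90.0])
observed_price = np.array([1800.0, 2050.0, 2350.0, 1600.0, 2450.0])
high_crime = np.array([False, False, False, True, True])

predicted_price = base_price + proportionality_constant * floor_area

# Compare error across the two subgroups; a large gap suggests the model
# behaves differently for different neighborhoods.
for label, mask in [("above-average crime", high_crime),
                    ("below-average crime", ~high_crime)]:
    error = np.mean(np.abs(observed_price[mask] - predicted_price[mask]))
    print(f"{label}: mean absolute error = {error:.0f}")
```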

It grows increasingly difficult to measure correlated biases. Suppose the model’s error is especially large for the five examples in the data that are located in an affluent neighborhood, in a school zone, with a two-car garage, during a recession. Does that indicate that the model is biased, or that the data varied far from the mean? Given the data, it may be impossible to know, and beyond that, challenging to gather more data.

If the model contains a naive hypothesis, such as the one proposed here, where rental costs grow in proportion to apartment floor space and nothing else is considered, that constrains the predictions the model can make. The risk is that, not knowing what the prediction will be used for, the assumptions may drive unanticipated results. The data scientist faces all the risks of traditional software development, such as bugs in the code. On top of that is an additional blind spot, insofar as the model is finely tuned to the data.

To highlight the risks of model building without a “why,” consider how our apartment rental model may fail. In many cities, ethnicities are distributed unevenly between neighborhoods. What if the model is used by landlords to inform the market price of rental properties, and it unfairly drives rental prices higher among marginalized populations? This is a central risk in machine learning. A data scientist defined a model to predict rental prices with no particular use in mind. The model was blindly trained on data and released, with no consideration of how it would be used and what decisions it might ultimately inform. When the stakes are known from the start, the product can be designed with the right assumptions.

A robot dog, designed by the Sony Corporation. On show at the Barbican AI exhibition in London.

DJ: Why do you recommend keeping in mind ‘why’ when developing AI?

Blackburn: Organizations should be aware of the social consequences that may result from failed AI models when developing their approach. One example of a failed AI model is Amazon’s Rekognition system. Researchers found that Amazon’s image recognition models did markedly worse on subjects who belonged to multiple minority groups, namely women with darker skin.

This example makes clear why it is critical for data scientists to define the broad characteristics of their AI models before training them. After models have been trained, the “why” informs what tests are performed to determine whether a model is well-behaved or pathological. The “why” tells us what to test and how to test it. We know what the model must accomplish to provide value to the user and what the consequences are for failing to perform sufficiently well.

DJ: What are the consequences of developers focusing on the ‘what’ instead of the ‘why’?

Blackburn: When a data scientist builds a product with a focus on the “what” rather than the “why,” a poor user experience emerges. Tech demos shouldn’t be faulted for unbridled innovation, but a user-focused experience demands more.

There are many examples of product developments that focus on the “what” rather than the “why.” Among them, a central theme emerges: the technology’s potential stands far above the experience of the typical user.

One prime example is how some developers create chatbots. The “why” for chat is to provide a low-friction way for customers to communicate with a digital business. The “what” of a chatbot is to push the advantages of chat to its logical limit. Customers can get consistent, immediate responses to their questions. The operational cost is a fixed cost that scales better to a large customer base.

In practice, the huge promise of chatbots is rarely achieved. If you have interacted with a business’s chatbot in the past, I’d wager you either had a simple request which it could handle successfully, or you were disappointed and frustrated by the experience. Disappointment was likely caused by the bot’s inability to understand and respond to your intent. When the bot couldn’t handle your intents, there was likely no way to escalate your situation to a human agent.

To create a better chatbot experience, businesses should put the “why” at the center of the chatbot. Why do businesses offer customer care? Good customer care improves the brand, improves customer satisfaction, and drives customer promotion of the business’s product. Chatbots can reduce cost over a support team, but the “why” of customer care shouldn’t be forgotten.

To keep the “why” at the center, the business should identify metrics that closely track successful support outcomes. At the end of a chatbot conversation, a natural interaction that provides data is asking the user whether their issue has been resolved. This metric indicates whether the chatbot has successfully reduced the business’s cost to satisfy customer needs. When a customer’s needs can’t be resolved by the chatbot, a human agent should be added to the chat to assist. This provides a mechanism for the chatbot to solve common customer needs without creating a “long tail” of needs that never get addressed. Finally, the business should be organized around continuous improvement. Chatbot logs should be analyzed to ensure the bots are succeeding and to identify new skills to program into the chatbot. Taken together, this approach can reduce operational costs without reducing the quality of customer care.
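
As a rough, hypothetical sketch of how those ideas might fit together in code, the snippet below tracks escalation to a human agent and the end-of-conversation resolution question; the intents, class, and function names are all invented for illustration and are not taken from any particular chatbot framework.

```python
from dataclasses import dataclass, field

# Intents this hypothetical bot knows how to handle.
KNOWN_INTENTS = {"reset_password", "check_order_status"}

@dataclass
class SupportSession:
    user_id: str
    handled_by_bot: bool = True   # flipped to False once a human is pulled in
    resolved: bool = False        # set from the closing "was this resolved?" question
    transcript: list = field(default_factory=list)

def handle_message(session: SupportSession, intent: str) -> str:
    """Answer known intents; escalate anything the bot can't handle."""
    session.transcript.append(intent)
    if intent in KNOWN_INTENTS:
        return f"Bot response for intent: {intent}"
    session.handled_by_bot = False
    return "Connecting you with a human agent."

def close_session(session: SupportSession, user_says_resolved: bool) -> None:
    # The closing question supplies the success metric.
    session.resolved = user_says_resolved

def bot_resolution_rate(sessions: list) -> float:
    """Share of bot-only sessions the user marked as resolved."""
    bot_sessions = [s for s in sessions if s.handled_by_bot]
    return sum(s.resolved for s in bot_sessions) / len(bot_sessions) if bot_sessions else 0.0

# Example: one session handled by the bot, one escalated, then compute the rate.
s1, s2 = SupportSession("u1"), SupportSession("u2")
handle_message(s1, "reset_password"); close_session(s1, True)
handle_message(s2, "dispute_charge"); close_session(s2, False)
print(f"Bot resolution rate: {bot_resolution_rate([s1, s2]):.0%}")
```

Analyzing logged sessions this way is what lets the team spot recurring unhandled intents, the “new skills to program into the chatbot” mentioned above.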

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist; and an author. He is also interested in history, politics and current affairs.
