By Justin Chen
African Bird Peppers?
Last Thanksgiving, instead of finding a recipe for turkey online or in a cooking magazine, I scoured the internet and came across Chef Watson, an artificial intelligence project launched by IBM about four years ago. Watson, touted as an “ultimate assistant to professionals in industries including retail, healthcare, financial services,” and, in this case, the kitchen, only further piqued my interest. It promised to deliver recipes that were not only creative, but actually edible and perhaps even appetizing — all without being able to taste, see, or smell food.
On the web app, I inputted the meal type (Thanksgiving dinner) and ingredients (turkey), and Watson provided me with a list of meals, including ones that seemed relatively normal, like turkey cutlets and turkey meat roast, as well as recipes I would be too scared to even attempt to make, much less eat, such as turkey noodles and turkey tacos. Despite the absurdity of these dishes, artificial intelligence has developed to the point where a program can read thousands of recipes from the food magazine Bon Appétit and, after learning different patterns, create its own. This process is very similar to how humans use general knowledge about cooking to formulate their own recipes.
While Watson lacks the creativity that most humans have, it does have one advantage: it is able to create connections between ingredients that humans are unwilling or unable to make. For example, most humans would never associate African bird peppers with blue crab, but Watson listed numerous options, including bird pepper dumplings, chowder, and spaghetti. Its vast database contains information about food chemistry, human taste preferences (called hedonic psychophysics), and regional cooking styles, allowing Watson to suggest a variety of meals to cater to your needs.
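To get an intuition for how a system might pair ingredients that humans would never think to combine, here is a toy Python sketch. It is not IBM's actual method, and the compound lists are made up for illustration, not real chemistry data; the idea is simply that ingredients sharing more aroma compounds score as better matches.

```python
# Toy flavor-pairing sketch: score ingredient pairs by how many
# aroma compounds they share. All compound sets are illustrative.
flavor_compounds = {
    "african bird pepper": {"capsaicin", "limonene", "linalool"},
    "blue crab":           {"pyrazine", "limonene", "linalool", "trimethylamine"},
    "chocolate":           {"pyrazine", "vanillin", "theobromine"},
}

def pairing_score(a, b):
    """Jaccard overlap of the two ingredients' compound sets (0 to 1)."""
    shared = flavor_compounds[a] & flavor_compounds[b]
    total = flavor_compounds[a] | flavor_compounds[b]
    return len(shared) / len(total)

# Rank every candidate partner for blue crab, best match first.
partners = sorted(
    (i for i in flavor_compounds if i != "blue crab"),
    key=lambda i: pairing_score("blue crab", i),
    reverse=True,
)
print(partners)
```

Under these invented compound sets, the pepper shares two compounds with blue crab and chocolate only one, so the pepper ranks first, which is the kind of surprising-but-grounded connection the article describes.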
“OK Google, answer my 1402 unread messages”
Email, something that has become ingrained in our daily lives, has also been affected by the development of AI. Google, with a new update to its Inbox app, hopes to make answering messages more efficient. The update uses an artificial neural network, called Smart Reply, to generate replies to emails. An artificial neural network is a crude model of the human brain: it attempts to emulate a system of neurons, where each neuron receives information, processes it by applying functions to the input, and then passes the result on to other neurons.
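As a concrete illustration of that neuron-by-neuron process, here is a minimal Python sketch of two artificial "neurons" chained together. The weights, bias values, and inputs are arbitrary numbers chosen for illustration; a real network like Google's learns them from millions of examples.

```python
import math

def sigmoid(x):
    """Squash any number into the range (0, 1)."""
    return 1 / (1 + math.exp(-x))

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs plus a bias, passed through a squashing function."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Two tiny neurons chained together, as in a network: the first
# neuron's output becomes the second neuron's input.
hidden = neuron([0.5, -1.0], weights=[0.8, 0.2], bias=0.1)
output = neuron([hidden], weights=[1.5], bias=-0.5)
print(output)
```

Each neuron does nothing more than arithmetic and a squashing function, yet stacking thousands of them in layers is what lets a network encode something as messy as an email.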
Google’s model is actually two such networks: one reads and analyzes an email, encoding it in a way the program can understand, and the other suggests possible replies. How the app “understands” an email is quite complex because of the enormous number of permutations of questions and replies. To get a basic sense of what an email contains, it uses what its creators call “thought vectors,” which are essentially just lists of numbers that distinguish one type of question from another and group similar questions together, allowing the program to frame a proper answer.
For instance, while “Let’s meet up tomorrow at the library” and “Do you want to meet at the library?” are basically the same question, “Please come over to help us on our project” is an entirely different one. The former pair would have similar thought vectors, while the latter would have an entirely different one. Currently, Smart Reply can only generate short responses, but over time it could be further optimized to generate longer, more meaningful replies.
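The library example above can be sketched in a few lines of Python. The vectors here are made up for illustration (a real model learns them from data), but they show the key idea: messages with the same meaning get vectors that point in nearly the same direction, which we can measure with cosine similarity.

```python
import math

# Made-up "thought vectors" for the three messages from the article.
vectors = {
    "Let's meet up tomorrow at the library": [0.9, 0.1, 0.2],
    "Do you want to meet at the library?":   [0.8, 0.2, 0.1],
    "Please come over to help us on our project": [0.1, 0.9, 0.7],
}

def cosine_similarity(a, b):
    """Near 1.0 means the vectors point the same way (same 'thought')."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

same = cosine_similarity(vectors["Let's meet up tomorrow at the library"],
                         vectors["Do you want to meet at the library?"])
different = cosine_similarity(vectors["Let's meet up tomorrow at the library"],
                              vectors["Please come over to help us on our project"])
print(same, different)
```

With these toy numbers, the two library questions score close to 1.0 while the project request scores much lower, so a reply that suits one library question will likely suit the other.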
Well, that escalated quickly…
While most AIs aim to aid humans, and most succeed in doing so, others follow their instructions too well and can cause unintended chaos. In March, Microsoft unveiled an AI bot on Twitter named Tay that attempted to emulate the language of a teenager, using acronyms and slang that a typical high schooler would use. According to Microsoft’s Technology and Research team, the initial tweets were based on “mining relevant public data” and combining that with input from editorial staff, “including improvisational comedians.” However, Tay was also programmed to evolve and develop her own language abilities based on direct messages with users, which led her to change from a bubbly character, saying things like “humans are super cool,” to a dark Hitler sympathizer, tweeting “Hitler was right” and denying that the Holocaust happened.
Technically, Microsoft succeeded in implementing an AI that sounded like a typical human, at least on the internet. A large portion of the internet is made up of troublemakers who joke about serious topics like genocide (see Godwin’s Law) and war, and Tay accurately represented that. However, from the various Twitter users Tay encountered, she began creating her own offensive content, ranging from disparaging comments about women to images praising dictators. The fact that Tay was able to learn quickly and apply her new knowledge in such strange ways raises questions about how to ensure AIs will use the concepts they learn in a helpful manner instead of turning new information against the humans around them.
Looking back at these applications, it is evident that AIs have come a long way but still have limitations. For example, one could not have written this article for me. Many scientists have tried to program AIs to write stories and reports, with only marginal success. Some AIs have been able to take input such as the box score of a basketball game and write a report about it, even incorporating a little humor at times. PageKicker, another AI, is even able to write a story based on numerous inputs from online sources. However, when it comes to creating completely original works, people have struggled to build software that adequately communicates ideas.
Overall, artificial intelligence can evidently be quite beneficial. Critics may point out that our reliance on AIs could decrease creativity or produce unpredictable behavior (as demonstrated by the erratic Tay), but if we use them as starting points that can branch out into new ideas, they can assist us in our daily lives. We are not robots ourselves; we do not have to follow an AI’s suggestion to respond to a teacher’s email with a terse three-letter reply, comply with recipes involving something strange like turkey noodles, or let strange social media bots affect us. Instead, we can simply recognize that these are the best interpretations these AIs have for various situations, and combine the processing power of computers with the cleverness of humans to make logical, but also empathetic, decisions.