Gary Marcus on Taming Silicon Valley

Gary Marcus visited Seattle University’s Technology Ethics Initiative on November 19 for a fireside chat, moderated by director Onur Bakiner, on his latest book, Taming Silicon Valley: How We Can Ensure That AI Works for Us. The main points of his lecture were summarized by Tabor Crary, Serafin Deleon Guerrero, Catherine Goode, Naja Johnson, Divisha Khanna, Ivonne Lares, Aryana Matsumoto, Maryjose Ortega Ortiz, Naomi Pettit, Carmen Ruiz-Zorrilla Garzón, Maya Walthall, Kamil Zaidi, and Zoe Zepeda Garcia, students in Dr. Onur Bakiner’s Comparative Politics class.

Gary Marcus’s Seattle University talk was a sobering discussion of his extensive research on artificial intelligence (AI) and of the difficulties posed by the current rush to develop AI technologies. Marcus’s early research addressed children’s language development, a background that later drew him to the study of language in AI.

Marcus opened the lecture by acknowledging that AI has historically driven real progress. Natural language processing was greatly enhanced by the ground-breaking transformer architecture unveiled in 2017, which allowed AI systems to carry out increasingly complex tasks. What Marcus does not buy into is the alchemy at play: far from threatening significant human roles or changing what it means to be human, the recent developments in language model architecture have not impressed him. Many businesses nowadays deploy AI despite its risks. One example is using AI to evaluate job candidates, which can entrench discrimination by perpetuating the biases in its training data. Large language models (LLMs) improve on their predecessors but still suffer from problems such as unreliability. These models mimic the words humans use and try to predict what comes next, but they cannot judge, fact-check, or reason as humans do. For example, LLMs do not reliably follow certain instructions they are given, such as not to reproduce copyrighted material.
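The prediction mechanism Marcus describes can be made concrete in a few lines of code. The sketch below is not from the talk; it assumes the Hugging Face transformers library and the public GPT-2 checkpoint, and the prompt is purely illustrative. It shows that the model’s core operation is assigning probabilities to possible next tokens, with no step at which facts are consulted or checked.

```python
# A minimal sketch of next-token prediction (illustrative assumptions:
# GPT-2 via Hugging Face's transformers library; the prompt is made up).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The famous Italian plumber is named"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probabilities for the token that would come next after the prompt.
# Nothing here verifies truth; the model only scores continuations.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r:>12}  p={float(prob):.3f}")
```

Whatever continuation scores highest gets emitted, whether or not it is accurate, which is one way to see why fluency and reliability come apart in these systems.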

One of Marcus’s most significant concerns is that people often do not double-check the information AI gives them. This is especially troubling because AI output sounds so confident that people may take it at face value without realizing they are reading something completely fabricated. Its lack of understanding amplifies existing problems, too: Marcus recounted how, while in Germany, he and a friend experimented with Oculus, an AI image generator, which produced an image of Mario when asked to draw an Italian plumber, highlighting the peculiarities and cultural biases of these systems.

He also talked about AI companies and the false promises they make. They strive to create the illusion that these models are more sophisticated and intelligent than they really are. The profit motive has led to a rush to churn out any and all forms of AI technology, with unreliable products released to the public. These companies keep promising great improvements, but the truth, as Marcus put it plainly, is that “deep learning is hitting a wall.” The clearest example is GPT-5, a promise that has yet to be fulfilled. AI businesses make these statements because, if people get excited and believe their words, the companies make more money.

Marcus had specific insights into academic use as well. He recommended that faculty let students use AI so that students can learn firsthand what AI does well and what it does poorly. Yet he also noted AI’s detrimental impact on academia through plagiarism and the disruption of the learning process.

The Q&A portion at the end of the talk was also insightful. One topic was the addictiveness of AI tools, many of which adopt human-like characteristics, such as presenting the results of a query as if someone were typing them out rather than displaying them all at once. Another was the importance of widespread AI literacy, especially in career fields where AI’s limitations are not widely known or emphasized. In the same vein, the final question centered on the fact that, even though AI will cause major shifts in parts of society such as certain career fields, there is still great uncertainty about what those shifts will be.

To conclude, Marcus made it clear that he sees AI as one helpful tool among others, so long as people understand its limitations and know when it is appropriate to use it.

Crary et al.

November 25, 2024