Human-Like Implemented A.I. and Human Problem-Solving A.I.
Russ McBride

Artificial intelligence is cycling into another peak of enthusiasm, with long-time evangelists like Kurzweil redoubling their hyperbole and Elon Musk suggesting that the odds are "a billion to one" that we are not already living in an AI simulation matrix. There is a new wave of attention within the formerly silent fields of economics, strategy, and management from those like Agrawal, Gans, and Goldfarb (2018), who see the falling price of AI-powered "prediction" reshaping entire industries. Others, more skeptical, see genuine human reasoning of the kind needed to, e.g., make firm-level strategic decisions as impossible for machines to duplicate and safe from encroachment for the foreseeable future. In this paper, I ask whether we have achieved human-like implemented AI (HLI-AI), a question that requires an exploration of what cognition and intelligence fundamentally are. I then consider what would be needed to build it and propose a distinction between HLI-AI and human problem-solving AI (HPS-AI): the former is what we do not currently have, while the latter is realized in a wide variety of (non-human-like) techniques that solve human-relevant problems. Finally, I suggest a way forward toward reasonable expectations of the role of HPS-AI in the socio-techno world.

Copyright © 2014 - 2024 The Brooklyn Research and Publishing Institute. All Rights Reserved.
Brooklyn, NY 11210, United States