AI as normal technology is a worldview that stands in contrast to the worldview of AI as impending superintelligence. Worldviews are constituted by their assumptions, vocabulary, interpretations of evidence, epistemic tools, predictions, and (possibly) values. These factors reinforce each other and form a tight bundle within each worldview. For example, we assume that, despite the obvious differences between AI and past technologies, they are sufficiently similar that we should expect well-established patterns, such as diffusion theory, to apply to AI in the absence of specific evidence to the contrary.
These limits are often enforced through regulation, such as the FDA's supervision of medical devices, as well as newer legislation such as the EU AI Act, which puts strict requirements on high-risk AI.10 In fact, there are (credible) concerns that existing regulation of high-risk AI is so onerous that it may lead to "runaway bureaucracy."11 Thus, we predict that slow diffusion will continue to be the norm in high-consequence tasks. At any rate, as and when new areas arise in which AI can be used in highly consequential ways, we can and must regulate them. A good example is the Flash Crash of 2010, in which automated high-frequency trading is thought to have played a part. This led to new curbs on trading, such as circuit breakers.12
For example, while GPT-4 reportedly achieved scores in the top 10% of bar exam test takers, this tells us remarkably little about AI's ability to practice law. The bar exam overemphasizes subject-matter knowledge and underemphasizes real-world skills that are far harder to measure in a standardized, computer-administered format. In other words, it emphasizes precisely what language models are good at: retrieving and applying memorized information.
With past general-purpose technologies such as electricity, computers, and the internet, the respective feedback loops unfolded over several decades, and we should expect the same to happen with AI as well.
Although ideas incrementally accrue at increasing rates, are they turning over established ones? The transformer architecture has been the dominant paradigm for most of the last decade, despite its well-known limitations. By analyzing over a billion citations in 241 subjects, Johan S. G. Chu and James A. Evans showed that, in fields in which the volume of papers is higher, it is harder, not easier, for new ideas to break through. This leads to an "ossification of canon."34 Perhaps this description applies to the current state of AI methods research.
because of the assumption that any system that passed it would be humanlike in important ways, and that we would be able to use such a system to automate a variety of complex tasks. Now that large language models can arguably pass it while only weakly meeting the expectations behind the test, its significance has waned.36 An analogy with mountaineering is apt. Every time we solve a benchmark (reach what we thought was the peak), we discover limitations of the benchmark (realize that we're on a 'false summit') and construct a new benchmark (set our sights on what we now think is the summit). This leads to accusations of 'moving the goalposts', but this is what we should expect given the intrinsic challenges of
The organizers of the 1956 Dartmouth conference hoped to make significant progress toward the goal through a "2-month, 10-man" effort.
On a conceptual level, intelligence, especially as a comparison between different species, is not well defined, let alone measurable on a one-dimensional scale.40
The term "intelligence" is used to refer to both capability and power, essentially erasing the distinction between them.