Unintended consequences: US election results herald reckless AI development




While the 2024 US election focused on traditional issues like the economy and immigration, its quiet impact on AI policy may prove more transformative. Without a single debate question or major campaign promise about AI, voters inadvertently tipped the scales in favor of accelerationists — those who advocate rapid AI development with fewer regulatory hurdles. The implications of this acceleration are profound, heralding a new era of AI policy that prioritizes innovation over caution and signals a decisive shift in the debate over AI's potential risks and rewards.

President-elect Donald Trump’s pro-business stance has led many to assume that his administration will favor those who develop and market AI and other advanced technologies. His party platform has little to say about AI. However, it advocates a policy approach focused on repealing AI regulations, specifically targeting what it describes as “radical left-wing ideas” within the outgoing administration’s executive orders. In contrast, the platform supports AI development aimed at promoting free speech and “human flourishing,” calling for policies that enable AI innovation while opposing measures seen as hindering technological progress.

Early indications, based on appointments to top government positions, underscore this direction. However, a bigger story is unfolding: the resolution of an intense debate over the future of AI.

The great debate

Ever since ChatGPT appeared in November 2022, there has been a vigorous debate between those in the AI field who want to accelerate AI development and those who want to slow it down.

Notably, in March 2023 the latter group proposed a six-month pause in the development of the most advanced AI systems, warning in an open letter that AI tools present “profound risks to society and humanity.” This letter, led by the Future of Life Institute, was prompted by OpenAI’s release of the GPT-4 large language model (LLM), a few months after ChatGPT launched.

The letter was initially signed by more than 1,000 technology leaders and researchers, including Elon Musk, Apple co-founder Steve Wozniak, 2020 presidential candidate Andrew Yang, podcaster Lex Fridman, and AI pioneers Yoshua Bengio and Stuart Russell. The number of signatories eventually grew to more than 33,000. Collectively, they became known as “doomers,” a term that captures their concerns about potential dangers from AI.

Not everyone agreed. OpenAI CEO Sam Altman did not sign. Neither did Bill Gates and many others. Their reasons for abstaining varied, though many expressed concerns about potential harm from AI. This led to many conversations about the possibility of AI running amok and causing disaster. It became fashionable for many in the AI field to share their assessment of the probability of doom, often abbreviated as p(doom). Nevertheless, work on AI development did not stop.

For the record, my p(doom) in June 2023 was 5%. That might seem low, but it is not zero. I felt that the major AI labs were sincere in their efforts to rigorously test new models before release and to provide significant guardrails for their use.

Many observers concerned about AI dangers rated the risks higher than 5%, and some rated them much higher. AI safety researcher Roman Yampolskiy puts the probability of AI ending humanity at more than 99%. That said, a study released earlier this year, before the election, representing the views of more than 2,700 AI researchers, showed that “the median prediction for extremely bad outcomes, such as human extinction, was 5%.” Would you board a plane if there were a 5% chance it would crash? This is the dilemma facing AI researchers and policymakers.

Must go faster

Others have openly dismissed concerns about AI, pointing instead to what they see as the technology’s enormous upside. These include Andrew Ng (who founded and led the Google Brain project) and Pedro Domingos (a professor of computer science and engineering at the University of Washington and author of “The Master Algorithm”). They argue instead that AI is part of the solution. As Ng has stated, there are real risks, such as climate change and future pandemics, and AI can be part of how these are addressed and mitigated.

Ng argued that AI development should not be paused but accelerated. This utopian view of technology is echoed by others collectively known as “effective accelerationists,” or “e/acc” for short. They argue that technology — and especially AI — is not the problem but the solution to most, if not all, of the world’s issues. Startup accelerator Y Combinator CEO Garry Tan, along with other prominent Silicon Valley leaders, added “e/acc” to their X usernames to signal alignment with the vision. New York Times correspondent Kevin Roose captured the essence of these accelerationists, saying they have an “all-gas, no-brakes approach.”

A Substack newsletter from a couple of years ago described the principles underlying effective accelerationism, offering a summation at the end of the article, along with a comment from OpenAI CEO Sam Altman.

AI acceleration ahead

The outcome of the 2024 election could be seen as a turning point, putting the accelerationist vision in a position to shape US AI policy in the coming years. For example, the President-elect recently appointed technology entrepreneur and venture capitalist David Sacks as “AI czar.”

Sacks, a vocal critic of AI regulation and an advocate of market-driven innovation, brings his experience as a technology investor to the role. He is one of the leading voices in the AI industry, and much of what he says about AI aligns with the accelerationist views expressed in the incoming party’s platform.

In response to the Biden administration’s 2023 AI executive order, Sacks tweeted: “The U.S. political and fiscal situation is hopelessly broken, but we have one unparalleled asset as a country: cutting-edge AI innovation driven by a completely free and unregulated market for software development. That just ended.” While the extent of Sacks’ influence on AI policy remains to be seen, his appointment signals a shift toward policies favoring industry self-regulation and rapid innovation.

Elections have consequences

I doubt that much of the voting public gave AI policy a second thought when casting their ballots. Nevertheless, in a very tangible way, the accelerationists won as a result of the election, potentially sidelining those advocating for a more cautious federal approach to mitigating AI’s long-term risks.

As accelerationists chart the path forward, the stakes could not be higher. Whether this era ushers in unparalleled progress or unintended catastrophe remains to be seen. As AI development accelerates, the need for informed public discourse and vigilant oversight becomes ever more paramount. How we navigate this era will define not only technological progress but also our collective future.

As a counterbalance to a lack of action at the federal level, it is possible that one or more states will adopt various regulations, as has already happened to some extent in California and Colorado. For example, California’s AI safety bills focus on transparency requirements, while Colorado addresses AI discrimination in hiring practices, offering models for state-level oversight. Now, all eyes will be on the voluntary testing and self-imposed guardrails of Anthropic, Google, OpenAI and other AI model developers.

In summary, the accelerationist victory means fewer restrictions on AI innovation. This increased speed may indeed lead to faster innovation, but it also raises the risk of unintended consequences. I have now revised my p(doom) to 10%. What is yours?

Gary Grossman is the EVP of the technology practice at Edelman and global leader of the Edelman AI Center of Excellence.
