2024 Showed It’s Possible to Regulate AI


Almost all of the big AI news this year has been about how fast the technology is advancing, the harms it’s causing, and speculation about how soon it will grow beyond the point where humans can control it. But 2024 also saw governments make significant inroads into regulating algorithmic systems. Here’s a breakdown of the most important AI legislation and regulatory efforts from the past year at the state, federal, and international levels.

STATE

US state lawmakers led the way on AI regulation in 2024, introducing hundreds of bills. Some had modest goals, such as creating study committees, while others would have imposed serious civil liability on AI developers if their creations caused catastrophic harm to society. The majority of the bills failed to pass, but several states enacted meaningful legislation that could serve as a model for other states or for Congress (should Congress ever decide to act).

As AI-generated slop flooded social media ahead of the election, politicians in both parties got behind anti-deepfake laws. More than 20 states now have restrictions on deceptive AI-generated political advertisements in the weeks before elections. Bills aimed at curbing AI-generated pornography, particularly images of minors, also received strong bipartisan support in states including Alabama, California, Indiana, North Carolina, and South Dakota.

Not surprisingly, as the home of the tech industry, California produced some of the most ambitious AI proposals. One high-profile bill would have forced AI developers to take safety precautions and held companies accountable for catastrophic damage caused by their systems. The bill passed both chambers of the legislature amid a fierce lobbying effort but was ultimately vetoed by Governor Gavin Newsom.

Newsom did, however, sign more than a dozen other bills aimed at less apocalyptic but more immediate AI harms. One new California law requires health insurers to ensure that the AI systems they use to make coverage determinations are fair and equitable. Another requires generative AI developers to create tools that label content as AI-generated. And a pair of bills prohibits the distribution of a dead person’s AI-generated likeness without consent and mandates that agreements covering living people’s AI-generated likenesses clearly specify how the content will be used.

Colorado passed a first-of-its-kind in the US law requiring companies that develop and use AI systems to take reasonable steps to ensure the tools aren’t discriminatory. Consumer advocates called the legislation an important baseline. Similar bills are likely to be hotly debated in other states in 2025.

And, in a middle finger to both our future robot overlords and the planet, Utah enacted a law prohibiting any government entity from granting legal personhood to artificial intelligence, inanimate objects, bodies of water, atmospheric gases, weather, plants, and other non-human things.

FEDERAL

Congress talked a great deal about AI in 2024, and the House ended the year by releasing a 273-page bipartisan report outlining guiding principles and recommendations for future regulation. But when it came to actually passing legislation, federal lawmakers accomplished very little.

Federal agencies, on the other hand, were busy all year trying to meet the goals set out in President Joe Biden’s 2023 executive order on AI. And several regulators, notably the Federal Trade Commission and the Department of Justice, cracked down on misleading and harmful AI systems.

The work agencies did to implement the AI executive order wasn’t particularly sexy or headline-grabbing, but it laid important foundations for the governance of public and private AI systems going forward. For example, federal agencies embarked on an AI-talent hiring spree and created standards for responsible model development and harm mitigation.

And, in a big step toward increasing public understanding of how the government uses AI, the Office of Management and Budget pushed (most of) its fellow agencies to disclose critical information about the AI systems they use that may affect people’s rights and safety.

On the enforcement side, the FTC’s Operation AI Comply targeted companies using AI in deceptive ways, such as writing fake reviews or giving legal advice, and the agency accused the AI gun-detection company Evolv of making false claims about its product’s capabilities. The FTC also settled an investigation into the facial recognition company IntelliVision, which it accused of falsely claiming its technology was free of racial and gender bias, and banned the pharmacy chain Rite Aid from using facial recognition for five years after an investigation determined the company had used the tools to discriminate against shoppers.

The DOJ, meanwhile, joined state attorneys general in a lawsuit accusing the real estate software company RealPage of facilitating a massive algorithmic price-fixing scheme that raised rents across the country. It also won several antitrust cases against Google, including one concerning the company’s monopoly on internet search, which could shift the balance of power in the emerging AI search industry.

INTERNATIONAL

In August, the European Union’s AI Act came into force. The law, which already serves as a model for other jurisdictions, requires AI systems that perform high-risk functions, such as assisting with hiring or medical decisions, to mitigate risks and meet certain standards for training-data quality and human oversight. It also bans the use of certain other AI systems, such as algorithms that could be used to assign a country’s residents social scores that are then used to deny them rights and privileges.

In September, China issued a major AI safety governance framework. Like similar frameworks published by the US National Institute of Standards and Technology, it is non-binding but creates a common set of standards for AI developers to follow when identifying and mitigating risks in their systems.

One of the most interesting pieces of AI policy legislation comes from Brazil. In late 2024, the country’s senate passed a comprehensive AI safety bill. It faces a challenging road ahead, but if enacted, it would create an unprecedented set of protections for the kinds of copyrighted material commonly used to train generative AI systems. Developers would have to disclose which copyrighted works are included in their training data, and creators would have the power to prohibit the use of their work for training AI systems or to negotiate compensation agreements based, in part, on the size of the AI developer and how the material will be used.

Like the EU’s AI Act, Brazil’s proposed law would also require high-risk AI systems to follow certain safety protocols.


