Sunday, December 22, 2024


Google, Microsoft form new A.I. industry group to set safety standards



Satya Nadella, CEO of Microsoft, speaks during an interview in Redmond, Washington, March 15, 2023.

Bloomberg | Bloomberg | Getty Images

Four leading artificial intelligence companies launched a new industry group on Wednesday to identify best safety practices and promote the technology's use in addressing great societal challenges.

The group underscores how, until policymakers come up with new rules, the industry will likely need to continue to police itself.


Anthropic, Google, Microsoft and OpenAI said the new Frontier Model Forum had four key goals, which Google outlined in a blog post:

1. Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.

2. Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.

3. Collaborating with policymakers, academics, civil society and companies to share knowledge about trust and safety risks.

4. Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.

Organizations that meet several criteria can join the group. Those include developing or deploying frontier models, or “large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks,” Google said in its blog post. They also must show a commitment to safety “through technical and institutional approaches.”

The group said that in the coming months it would create an advisory board of members from diverse backgrounds to guide its priorities. The founding companies will consult civil society organizations as they come up with the group's governance design and funding.


The effort comes as policymakers weigh what appropriate guardrails on the technology could look like without hindering innovation or ceding the country's position in the AI race.

Senate Majority Leader Chuck Schumer, D-N.Y., has been spearheading an effort to create an AI legislative framework, while many other bills tackling specific slices of the technology’s impact have already been introduced. The White House has also been hosting meetings with industry leaders and AI experts and recently announced that leading companies agreed to a voluntary pledge for developing AI safely.



