Bill Gates explains why we shouldn’t be afraid of A.I.

US philanthropist Bill Gates speaks during the Global Fund Seventh Replenishment Conference in New York on September 21, 2022.

Mandel Ngan | AFP | Getty Images

Microsoft co-founder Bill Gates is a believer in the potential of artificial intelligence, often repeating that he considers models like the one at the heart of ChatGPT the most important advance in technology since the personal computer.

The technology’s emergence could lead to issues like deepfakes, biased algorithms, and cheating in school, he says, but he predicts that the problems stemming from the technology are solvable.

“One thing that’s clear from everything that has been written so far about the risks of AI — and a lot has been written — is that no one has all the answers,” Gates wrote in a blog post this week. “Another thing that’s clear to me is that the future of AI is not as grim as some people think or as rosy as others think.”

By broadcasting a middle-of-the-road view of AI risks, Gates could shift the debate around the technology away from doomsday scenarios and toward more limited regulation addressing current harms, just as governments around the world grapple with how to regulate the technology and its potential downsides. On Tuesday, for example, senators received a classified briefing about AI and the military.

Gates is one of the most prominent voices on artificial intelligence and its regulation. He is also still closely affiliated with Microsoft, which has invested in OpenAI and integrated OpenAI's ChatGPT technology into core products, including Office.

In the blog post, Gates cites how society reacted to previous technological advances to make the case that humans have adapted to major changes before and will do so for AI as well.

“For example, it will have a big impact on education, but so did handheld calculators a few decades ago and, more recently, allowing computers in the classroom,” Gates wrote.

Gates suggests that the kind of regulation the technology needs is “speed limits and seat belts.”

“Soon after the first automobiles were on the road, there was the first car crash. But we didn’t ban cars — we adopted speed limits, safety standards, licensing requirements, drunk-driving laws, and other rules of the road,” Gates wrote.

Gates is worried about some of the challenges arising from the adoption of the technology, including how it could change people’s jobs, and “hallucination,” or the propensity for models like ChatGPT to invent facts, documents, and people.

For example, he cites the problem of deepfakes: AI models that let people easily create fake videos impersonating another person, which could be used to scam people or tip elections, he writes.

But he also suspects that people will get better at identifying deepfakes, and he cites deepfake detectors being developed by Intel and DARPA, a government research agency. He proposes regulation that would clearly define which kinds of deepfakes are legal to make.

He also worries about AI's ability to write code that searches for the kinds of software vulnerabilities needed to hack computers, and he suggests a global regulatory body modeled on the International Atomic Energy Agency.
