European Union flags flutter outside the EU Commission headquarters, in Brussels, Belgium, February 1, 2023
Yves Herman | Reuters
When Gerard de Graaf moved from Europe to San Francisco almost a year ago, his job had a very different feel to it.
De Graaf, a 30-year veteran of the European Commission, was tasked with resurrecting the EU office in the Bay Area. His title is senior envoy for digital to the U.S., and since September his main job has been to help the tech industry prepare for new legislation called the Digital Services Act (DSA), which goes into effect Friday.
At the time of his arrival, the metaverse trumped artificial intelligence as the talk of the town, tech giants and emerging startups were cutting thousands of jobs, and the Nasdaq was headed for its worst year since the financial crisis in 2008.
Within de Graaf’s purview, companies including Meta, Google, Apple and Amazon have had since April to get ready for the DSA, which takes inspiration from banking regulations. They face fines of as much as 6% of annual revenue if they fail to comply with the act, which was introduced in 2020 by the EC (the executive arm of the EU) to reduce the spread of illegal content online and provide more accountability.
Coming in as an envoy, de Graaf has seen more action than he expected. In March, there was the sudden implosion of the iconic Silicon Valley Bank, the second-largest bank failure in U.S. history. At the same time, OpenAI’s ChatGPT service, launched late last year, was setting off an arms race in generative AI, with tech money pouring into new chatbots and the large language models (LLMs) powering them.
It was a “strange year in many, many ways,” de Graaf said, from his office, which is co-located with the Irish Consulate on the 23rd floor of a building in downtown San Francisco. The European Union hasn’t had a formal presence in Silicon Valley since the 1990s.
De Graaf spent much of his time meeting with top executives, policy teams and technologists at the major tech companies to discuss regulations, the impact of generative AI and competition. Although regulations are enforced by the EC in Brussels, the new outpost has been a useful way to foster a better relationship between the U.S. tech sector and the EU, de Graaf said.
“I think there’s been a conversation that we needed to have that did not really take place,” said de Graaf. With a hint of sarcasm, de Graaf said that somebody with “infinite wisdom” decided the EU should step back from the region during the internet boom, right “when Silicon Valley was taking off and going from strength to strength.”
The thinking at the time within the tech industry, he said, was that the internet is a “different technology that moves very fast” and that “policymakers don’t understand it and can’t regulate it.”
Facebook Chairman and CEO Mark Zuckerberg arrives to testify before the House Financial Services Committee on “An Examination of Facebook and Its Impact on the Financial Services and Housing Sectors” in the Rayburn House Office Building in Washington, DC on October 23, 2019.
Mandel Ngan | AFP | Getty Images
However, some major leaders in tech have shown signs that they’re taking the DSA seriously, de Graaf said. He noted that Meta CEO Mark Zuckerberg met with Thierry Breton, the EU commissioner for internal market, to go over some of the specifics of the rules, and that X owner Elon Musk has publicly supported the DSA after meeting with Breton.
De Graaf said he’s seeing “a bit more respect and understanding for the European Union’s position, and I think that has accelerated after generative AI.”
‘Serious commitment’
X, formerly known as Twitter, had withdrawn from the EU’s voluntary guidelines for countering disinformation. There was no penalty for not participating, but X must now comply with the DSA, and Breton said after his meeting with Musk that “fighting disinformation will be a legal obligation.”
“I think, in general, we’ve seen a serious commitment of big companies also in Europe and around the world to be prepared and to prepare themselves,” de Graaf said.
The new rules require platforms with at least 45 million monthly active users in the EU to produce risk assessments and mitigation plans. They must also give certain researchers access to inspect their services for harms, and provide users more transparency about their recommendation systems, including letting people adjust their settings.
Timing could be a challenge. As part of their cost-cutting measures implemented early this year, many companies laid off members of their trust and safety teams.
“You ask yourself the question, will these companies still have the capacity to implement these new regulations?” de Graaf said. “We’ve been assured by many of them that in the process of layoffs, they have a renewed sense of trust and safety.”
The DSA doesn’t require that tech companies maintain a certain number of trust and safety workers, de Graaf said, just that they comply with the law. Still, he said one social media platform that he declined to name gave an answer “that was not entirely reassuring” when asked how it plans to monitor for disinformation in Poland during the upcoming October elections, as the company has only one person in the region.
That's why the rules also require transparency about what exactly the platforms are doing.
“There’s a lot we don’t know, like how these companies moderate content,” de Graaf said. “And not just their resources, but also how their decisions are made with which content will stay and which content is taken down.”
De Graaf, a Dutchman who's married with two kids, has spent the past three decades going deep on regulatory issues for the EC. He previously worked on the Digital Services Act and the Digital Markets Act, European legislation aimed at protecting consumer rights and enhancing competition.
This isn’t his first stint in the U.S. From 1997 to 2001, he worked in Washington, D.C., as “trade counsellor at the European Commission’s Delegation to the United States,” according to his bio.
For all the talk about San Francisco’s “doom loop,” de Graaf said he sees a different level of energy in the city as well as further south in Silicon Valley.
There’s still “so much dynamism” in San Francisco, he said, adding that it’s filled with “such interesting people and objective people that I find incredibly refreshing.”
“I meet very, very interesting people here in Silicon Valley and in San Francisco,” he said. “And it’s not just the companies that are kind of avant-garde as the people behind them, so the conversations you have here with people are really rewarding.”
The generative AI boom
Generative AI was a virtually foreign concept when de Graaf arrived in San Francisco last September. Now, it’s about the only topic of conversation at tech conferences and cocktail parties.
The rise and rapid spread of generative AI has prompted a number of big tech companies and high-profile executives to call for regulation, citing the technology's potential influence on society and the economy. In June, the European Parliament cleared a major hurdle toward passing the EU AI Act, the bloc's package of AI regulations. It's still a long way from becoming law.
De Graaf noted the irony in the industry’s attitude. Tech companies that have for years criticized the EU for overly aggressive regulations are now asking, “Why is it taking you so long?” de Graaf said.
“We will hopefully have an agreement on the text by the end of this year,” he said. “And then we always have these transitional periods where the industry needs to prepare, and we need to prepare. That might be two years or a year and a half.”
The rapidly changing landscape of generative AI makes it tricky for the EU to quickly formulate regulations.
“Six months ago, I think our big concern was to legislate the handful of companies — the extremely powerful, resource rich companies — that are going to dominate,” de Graaf said.
But as more powerful LLMs become freely available, the technology is spreading, making regulation more challenging: it's no longer just a matter of dealing with a few big companies. De Graaf has been meeting with local universities such as Stanford to learn about transparency into LLMs, how researchers can access the technology and what kind of data companies could provide to lawmakers about their software.
One proposal being floated in Europe is the idea of publicly funded AI models, so control isn’t all in the hands of big U.S. companies.
“These are questions that policymakers in the U.S. and all around the world are asking themselves,” de Graaf said. “We don’t have a crystal ball where we can just predict everything that’s happening.”
Even if there are ways to broaden how AI models are developed, there's little doubt about where the money for processing power is flowing. Nvidia, which just reported blowout earnings for the latest quarter and has seen its stock price triple this year, is by far the leader in providing the kind of chips needed to power generative AI systems.
“That company, they have a unique value proposition,” de Graaf said. “It’s unique not because of scale or a network effect, but because their technology is so advanced that it has no competition.”
He said his team meets "quite regularly" with Nvidia and its policy team and has been learning "how the semiconductor market is evolving."
"That's a useful source of information for us, and of course, where the technology is going," de Graaf said. "They know where a lot of the industries are stepping up and are on the ball or are going to move more quickly than other industries."