Llion Jones had a big role at Google, where he worked for almost 12 years. He was one of the eight authors of the pivotal Transformer research paper, which is central to the latest wave of generative artificial intelligence.
However, like all of his co-authors, Jones has now left Google. He’s joining fellow ex-Google researcher David Ha to build a generative AI research lab in Tokyo called Sakana AI. Jones said that while he has no ill will toward Google, he realized that the company’s size was keeping him from doing the kind of work he wanted to pursue.
“It’s just a side effect of big company-itis,” Jones told CNBC in an interview. “I think the bureaucracy had built to the point where I just felt like I couldn’t get anything done.”
Jones, who studied AI in college and holds a master's degree in advanced computer science from the University of Birmingham, is at the center of the action. The 2017 paper he helped write at Google laid out innovations that fed into OpenAI's creation of the viral chatbot ChatGPT. The T in ChatGPT stands for Transformer, the architecture behind much of today's frenetic generative AI activity.
“We’re kind of crazy,” Jones said. “We’re looking at nature-inspired methods to see if we can find a different way of doing things, rather than doing a huge, humongous model.” Sakana isn’t announcing any investors.
Jones became a software engineer at Google’s YouTube in 2012. According to his LinkedIn profile, he started “researching machine intelligence and natural language understanding” at Google in 2015.
Google is one of a number of large tech companies that hired hordes of researchers in recent years, some straight from universities, to construct AI models aimed at enriching their products. Over time, Jones said he encountered questions about why the software was malfunctioning and whose fault it was. He found it all to be a distraction from the research.
“Every day I would be spending my time trying to get access to resources, trying to get access to data,” Jones said.
Now, after years of keeping this work inside its research labs, Google is rushing to incorporate generative AI, including large language models (LLMs), into its search engine, YouTube and other products. The models can summarize information and come up with human-like responses to written questions.
Google is focusing “the entire company around this one technology,” Jones said, and that makes innovation more challenging “because that’s quite a restrictive framework.”
Ha said he and Jones have spoken with others who want to work on LLMs, but they haven’t finalized their plans.
“I would be surprised if language models were not part of the future,” said Ha, who left Google last year to be head of research at startup Stability AI. He said he doesn’t want Sakana to just be another company with an LLM.
Both Jones and Ha have unflattering things to say about OpenAI, which has brought generative AI to the mainstream after raising billions of dollars from Microsoft and other investors. Ha described it as “becoming so big and a bit bureaucratic,” not really different from the groups within Google.
Jones said he doesn’t think OpenAI is all that innovative. He said that for OpenAI’s two biggest successes, ChatGPT and the DALL-E service for creating images with a few words of text, the startup took research he performed at Google and applied it on a large scale, making refinements along the way but holding off on sharing the developments with the community. While OpenAI has released neither of the technologies under an open-source license, it has published papers on some of the underlying systems.
Representatives from Google and OpenAI didn’t respond to requests for comment.
Ha said Sakana has brought on a part-time researcher from academia, and the company will eventually hire more people. Asked if they’ve added any other Google employees, Ha said, “Not yet.”