All of a sudden, DeepSeek is everywhere.
Its R1 model is open source, was allegedly trained for a fraction of the cost of other AI models, and is just as good as, if not better than, ChatGPT.
This lethal combination hit Wall Street hard, causing tech stocks to tumble and making investors question how much money is actually needed to develop good AI models. DeepSeek engineers report that training consumed roughly 2.788 million GPU-hours on Nvidia H800 chips, at an estimated cost of around $6 million, compared to OpenAI's GPT-4, which reportedly cost $100 million to train.
DeepSeek's cost efficiency also challenges the idea that larger models and more data lead to better performance. Amidst the frenzied conversation about DeepSeek's capabilities, its threat to AI companies like OpenAI, and spooked investors, it can be hard to make sense of what's going on. But AI experts with veteran experience have weighed in with valuable perspectives.
Hampered by trade restrictions that limit its access to Nvidia's most advanced GPUs, China-based DeepSeek had to get creative in developing and training R1. That it was able to accomplish this feat for only $6 million (which isn't a lot of money in AI terms) was a revelation to investors.
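The headline figure is simple arithmetic. A back-of-envelope sketch, assuming the roughly 2.788 million GPU-hours and ~$2-per-GPU-hour rental rate that have been widely reported for the training run (both figures are assumptions here, not independently verified):

```python
# Back-of-envelope check of the reported DeepSeek training cost.
# Assumed inputs: ~2.788M H800 GPU-hours at a ~$2/GPU-hour rental rate.
gpu_hours = 2_788_000    # reported GPU-hours for the training run (assumption)
rate_per_hour = 2.0      # assumed rental price in USD per GPU-hour

total_cost = gpu_hours * rate_per_hour
print(f"Estimated training cost: ${total_cost / 1e6:.2f} million")
# -> Estimated training cost: $5.58 million
```

That lands just under $6 million, which is why the figure is usually rounded up in coverage; the real dispute among analysts is whether the reported GPU-hours capture the full cost of experimentation and infrastructure.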
But AI experts weren't surprised. "At Google, I asked why they were fixated on building THE LARGEST model. Why are you going for size? What function are you trying to achieve? Why is the thing you were upset about that you didn't have THE LARGEST model? They responded by firing me," posted Timnit Gebru, who was famously terminated from Google for calling out AI bias, on X.
Hugging Face's climate and AI lead Sasha Luccioni pointed out how AI investment is precariously built on marketing and hype. "It's wild that hinting that a single (high-performing) LLM is able to achieve that performance without brute-forcing the shit out of thousands of GPUs is enough to cause this," said Luccioni.
DeepSeek R1 performed comparably to OpenAI's o1 model on key benchmarks. It marginally surpassed, equaled, or fell just below o1 on math, coding, and general knowledge tests. That is to say, there are other models out there, like Anthropic's Claude, Google's Gemini, and Meta's open source Llama, that are just as capable for the average user.
But R1 is causing such a frenzy because of how little it cost to make. "It's not smarter than earlier models, just trained more cheaply," said AI research scientist Gary Marcus.
The fact that DeepSeek was able to build a model that competes with OpenAI's models is pretty remarkable. Andrej Karpathy, who co-founded OpenAI, posted on X, "Does this mean you don't need large GPU clusters for frontier LLMs? No, but you have to ensure that you're not wasteful with what you have, and this looks like a nice demonstration that there's still a lot to get through with both data and algorithms."
Wharton AI professor Ethan Mollick said the story isn't about R1's raw capabilities, but about which models people currently have access to. "DeepSeek is a really good model, but it is not generally a better model than o1 or Claude," he said. "But since it is both free and getting a ton of attention, I think a lot of people who were using free 'mini' models are being exposed to what an early 2025 reasoner AI can do and are surprised."
DeepSeek R1's breakout is a huge win for open source proponents, who argue that democratizing access to powerful AI models ensures transparency, innovation, and healthy competition. "To people who think 'China is surpassing the U.S. in AI,' the correct thought is 'open source models are surpassing closed ones,'" said Yann LeCun, chief AI scientist at Meta, which has backed open sourcing with its own Llama models.
Computer scientist and AI expert Andrew Ng didn't explicitly mention the significance of R1 being an open source model, but highlighted how the DeepSeek disruption is a boon for developers, since it allows access that is otherwise gatekept by Big Tech.
"Today's 'DeepSeek selloff' in the stock market -- attributed to DeepSeek V3/R1 disrupting the tech ecosystem -- is another sign that the application layer is a great place to be," said Ng. "The foundation model layer being hyper-competitive is great for people building applications."