Simon Willison's Weblog


That model was trained in part using their unreleased R1 "reasoning" model. Today they've released R1 itself, along with a whole family of new models derived from that base.

There's a lot of stuff in the new release.

DeepSeek-R1-Zero appears to be the base model. It's over 650GB in size and, like most of their other releases, is under a clean MIT license. DeepSeek warn that "DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing." ... so they also released:

DeepSeek-R1, which "incorporates cold-start data before RL" and "achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks". That one is also MIT licensed, and is a similar size.

I don't have the capacity to run models larger than about 50GB (I have an M2 with 64GB of RAM), so neither of these two models is something I can easily play with myself. That's where the new distilled models come in.

To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen.

This is an interesting flex! They have models based on Qwen 2.5 (14B, 32B, Math 1.5B and Math 7B) and Llama 3 (Llama-3.1 8B and Llama 3.3 70B Instruct).

Weirdly those Llama models have an MIT license attached, which I'm not sure is compatible with the underlying Llama license. Qwen models are Apache licensed so maybe MIT is OK?

(I also just saw the MIT license files say "Copyright (c) 2023 DeepSeek" so they may need to pay a bit more attention to how they copied those in.)

Licensing aside, these distilled models are fascinating beasts.

Running DeepSeek-R1-Distill-Llama-8B-GGUF

Quantized versions are already starting to show up. So far I've tried just one of those: unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF, released by Unsloth AI, and it's really fun to play with.

I'm running it using the combination of Ollama, LLM and the llm-ollama plugin.
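If you don't have those pieces already, one typical way to set them up (assuming macOS with Homebrew and pipx; both tools have other documented install routes) is:

```bash
brew install ollama  # the Ollama local model server
pipx install llm     # the LLM command-line tool
```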

First I fetched and ran the model using Ollama itself:
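Ollama can pull GGUF models straight from Hugging Face, so the command would have looked something like this (the `:Q8_0` quantization tag is an assumption; an 8-bit quant of an 8B model lines up with the roughly 8.5GB download mentioned below):

```bash
# Pull the quantized model from Hugging Face and start an interactive chat
ollama run hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0
```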

This downloads around 8.5GB of model data and starts an interactive chat interface.

Once the model has been fetched, LLM can talk to it too. I prefer using LLM for experiments because it logs everything to SQLite for later exploration.
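Two LLM subcommands help with that exploration (both are part of the standard tool):

```bash
llm logs path  # print the location of the logs.db SQLite database
llm logs -n 3  # show the three most recent logged responses
```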

I installed the plugin and ran it like this:
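That would be something like the following, reusing the same Hugging Face model reference as the Ollama command above (the prompt is the one quoted below):

```bash
# Install the plugin, then prompt the Ollama-served model through LLM
llm install llm-ollama
llm -m 'hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0' \
  'a joke about a pelican and a walrus who run a tea room together'
```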

Each response starts with a `<think>...</think>` pseudo-XML tag containing the chain of thought used to help generate the response. For "a joke about a pelican and a walrus who run a tea room together" that started like this:

Okay, so I need to come up with a joke about a pelican and a walrus running a tea room together. Hmm, that's an interesting combination. Let me think about how these two characters might interact in a humorous situation.

First, let's consider their characteristics. Pelicans are known for their long beaks and webbed feet, often seen near the beach or water. Walruses have big teeth, thick fur, and they're generally found in colder climates, like icebergs or snowy areas. So, combining these two into a tea room setting is already a funny image.

[...]
It then thought for 20 paragraphs before outputting the joke! Here's the full transcript and here's the output at the end, after the `</think>` tag:

The Joke:

A pelican and a walrus decide to open a quaint little tea room together. The walrus, with its huge size, struggles to find comfortable chairs, so it sits on the table by accident, knocking over the teapot. Meanwhile, the pelican, trying to help, uses its beak to place dishes on the table, causing a few spills.

After a series of comical mishaps, the walrus looks up and says with a smile, "This isn't so fishy anymore." The pelican smirks and says, "Maybe not, but we do have a lot of krill in our tea!"

Explanation:

- Setup: Pelican and Walrus run an unlikely tea room. Their sizes lead to comical chaos.