

Miqu-1-70B: The Leaked Language Model Pushing the Boundaries of Open-Source AI

In late January 2024, the AI community was set abuzz by the sudden appearance of a new large language model called "Miqu-1-70B". Uploaded to the open-source platform HuggingFace by a user named "Miqu Dev", the model quickly garnered attention for its impressive performance on various benchmarks, rivaling industry giants like GPT-4 and GPT-3.5. As speculation grew that Miqu-1-70B was a leaked version of Mistral AI's unreleased model, the implications for the future of open-source AI became increasingly apparent.


A comprehensive analysis of the Miqu-1-70B language model, its impressive benchmarks, comparisons to leading models, and a guide to running it locally.



The Leak Heard Around the AI World

On January 28, 2024, "Miqu Dev" uploaded a set of files to HuggingFace, revealing the Miqu-1-70B model. Simultaneously, an anonymous user, possibly "Miqu Dev" himself, posted a link to the files on 4chan, igniting widespread interest and discussion in the AI community.

Suspicions quickly arose that Miqu-1-70B was a quantized version of Mistral AI's unreleased Mistral Medium model, given the similarities in prompt format and interaction style. These suspicions were confirmed by Mistral CEO Arthur Mensch, who acknowledged that a quantized version of an older Mistral model had been leaked by an employee of one of the company's early-access customers.

Technical Specifications and Architecture

Under the hood, Miqu-1-70B is a 70-billion-parameter model based on Meta's Llama 2 architecture. It has been quantized to run on less than 24GB of VRAM, making it more accessible to users without high-end hardware. The model uses a RoPE theta value of 1,000,000 and a 32K maximum context window, setting it apart from standard Llama 2 and CodeLlama models.
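You can check these configuration values yourself against the dequantized checkpoint that was later re-uploaded. The short sketch below uses the Transformers AutoConfig loader; the values noted in the comments are the ones reported for the leak, so treat them as assumptions to verify rather than guarantees:

from transformers import AutoConfig

# Download only the model's config.json and inspect the key settings
config = AutoConfig.from_pretrained("152334H/miqu-1-70b-sf")
print(config.rope_theta)               # RoPE theta, reportedly 1,000,000
print(config.max_position_embeddings)  # maximum context window, reportedly ~32K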

Benchmarks and Comparisons: Miqu-1-70B Holds Its Own

Despite being a leaked and quantized model, Miqu-1-70B has demonstrated remarkable performance on various benchmarks, approaching the capabilities of leading models like GPT-4.

On a multi-choice question test, Miqu-1-70B correctly answered 17 out of 18 questions, just one point shy of GPT-4's perfect score. It also achieved an impressive 83.5 on the EQ-Bench, nearing GPT-4's level of emotional intelligence.

In terms of perplexity, Miqu-1-70B is comparable to fine-tuned Llama 2 70B models, scoring less than 4 at a context length of 512. This outperforms the nerfed CodeLlama 70B model, which has a perplexity of around 5.5 at the same context length.
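As a quick refresher, perplexity is the exponential of the average per-token negative log-likelihood, so lower is better. The minimal sketch below shows one way such a number could be measured over a single 512-token window with Transformers; the function name and the model/tokenizer objects (loaded as in the guide further down) are illustrative assumptions, not the methodology behind the figures above:

import torch

def perplexity_at_512(model, tokenizer, text):
    # Tokenize and truncate to a 512-token context window
    ids = tokenizer(text, return_tensors="pt").input_ids[:, :512].to(model.device)
    with torch.no_grad():
        # Passing the inputs as labels makes the model return the mean
        # cross-entropy (negative log-likelihood) over the window
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()  # perplexity = exp(mean NLL)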

| Model | Parameters | Perplexity | MMLU | EQ-Bench |
|---|---|---|---|---|
| Miqu-1-70B | 70B | ~4 @ 512 | 70+ | 83.5 |
| GPT-4 | ? | ? | ? | ? |
| GPT-3.5 | 175B | ? | ? | ? |
| Llama 2 70B | 70B | ~4 @ 512 | ? | ? |
| CodeLlama 70B | 70B | ~5.5 @ 512 | ? | ? |
| Claude | ? | ? | ? | ? |
| Mistral/Mixtral-8x7B-Instruct | 56B | ? | ? | ? |

While comprehensive benchmark data for all models is not available, Miqu-1-70B's performance suggests that it is competitive with leading proprietary models like GPT-4 and GPT-3.5, as well as Mistral's own Mixtral-8x7B-Instruct model.

Running Miqu-1-70B Locally: A Step-by-Step Guide

For those eager to experiment with Miqu-1-70B, it is possible to run the model locally in Python using the Hugging Face Transformers library:

import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# Load the Llama 2 tokenizer (Miqu-1-70B shares its vocabulary)
tokenizer = LlamaTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")

# Miqu expects the Mistral-style [INST] ... [/INST] prompt format
input_ids = tokenizer("[INST] eloquent high camp prose about a cute catgirl [/INST]", return_tensors='pt').input_ids.cuda()

# Load the dequantized weights in float16 and spread them across available GPUs
model = LlamaForCausalLM.from_pretrained("152334H/miqu-1-70b-sf", torch_dtype=torch.float16, device_map='auto')

outputs = model.generate(input_ids, use_cache=False, max_new_tokens=200)
print(tokenizer.decode(outputs[0]))
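Note that the Transformers example above loads the dequantized float16 weights, which takes far more memory than the quantized files the leak originally consisted of. To work with those GGUF quantizations instead, llama.cpp or its Python bindings can be used. The sketch below uses llama-cpp-python; the file name and quantization level are assumptions, so point it at whichever GGUF file you actually downloaded:

from llama_cpp import Llama

llm = Llama(
    model_path="./miqu-1-70b.q4_k_m.gguf",  # assumed file name; adjust to your download
    n_ctx=4096,        # Miqu supports up to 32K context, but larger windows use more memory
    n_gpu_layers=-1,   # offload as many layers as possible to the GPU
)

# Miqu uses the Mistral-style [INST] ... [/INST] prompt format
output = llm("[INST] Write a short note on open-source AI. [/INST]", max_tokens=200)
print(output["choices"][0]["text"])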

Implications and Future Outlook

The leak of Miqu-1-70B has significant implications for the future of open-source AI development. It demonstrates the rapid progress being made in creating powerful, accessible models that can rival the performance of proprietary systems like GPT-4.

Mistral CEO Arthur Mensch's response to the leak suggests a potential shift towards a more collaborative approach in handling such incidents. Rather than pursuing legal action, Mensch acknowledged the leak and expressed excitement for the community's engagement with the model.

As we await Mistral's next official releases, which are expected to surpass the capabilities of Miqu-1-70B, the AI community is abuzz with anticipation. The success of Miqu-1-70B has set a new benchmark for open-source models and has sparked discussions about the potential for new paradigms in AI development and collaboration.

Conclusion

The emergence of Miqu-1-70B has sent shockwaves through the AI community, showcasing the immense potential of open-source models to compete with industry leaders. Its impressive performance on benchmarks and its ability to run locally have made it a subject of great interest among researchers and enthusiasts alike.

As we witness the rapid evolution of AI technology, the Miqu-1-70B leak serves as a reminder of the importance of innovation, collaboration, and the power of the open-source community in driving progress. With models like Miqu-1-70B pushing the boundaries of what is possible, we can expect to see even more groundbreaking developments in the near future.

