Mixtral 8x22B Instruct

by Mistral

Model ID: mixtral-8x22b-instruct

Frequently Asked Questions

  1. What is unique about the architecture of Mixtral 8x22B Instruct?
    Mixtral 8x22B Instruct uses a sparse mixture-of-experts (MoE) architecture: each transformer layer contains eight expert feed-forward networks, and a router activates two of them per token. This keeps per-token inference cost close to that of a much smaller dense model while retaining a large total parameter count, and the Instruct variant adds fine-tuning for instruction following, improving coherence and relevance of generated responses.

  2. How does Mixtral 8x22B Instruct compare to other models in its size range?
    Because only two of its eight experts run per token, it offers inference cost comparable to much smaller dense models while scoring competitively on benchmarks such as MMLU (77.75 in this listing), and its instruction-focused fine-tuning often yields better results on instruction-following tasks than similarly sized models.

  3. What are the main strengths of Mixtral 8x22B Instruct?
    The main strengths include its ability to generate high-quality responses based on user instructions, versatility across various language tasks, and strong performance metrics that make it suitable for diverse applications.
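The sparse routing mentioned in the FAQ can be illustrated with a toy sketch: a router scores eight experts per token and only the top two run, their outputs combined by renormalized router weights. This mirrors Mixtral's top-2-of-8 design, but the "experts" below are simple stand-in functions, not the real model.

```python
import math
import random

NUM_EXPERTS, TOP_K = 8, 2  # Mixtral routes each token to 2 of 8 experts
random.seed(0)

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def route(router_logits, token, experts):
    """Run only the top-k experts and combine their outputs by router weight."""
    top = sorted(range(NUM_EXPERTS), key=lambda i: router_logits[i], reverse=True)[:TOP_K]
    weights = softmax([router_logits[i] for i in top])  # renormalize over the top-k
    return sum(w * experts[i](token) for w, i in zip(weights, top))

# Toy experts: expert i just scales its input by i + 1.
experts = [lambda x, s=s: s * x for s in range(1, NUM_EXPERTS + 1)]
logits = [random.random() for _ in range(NUM_EXPERTS)]
print(route(logits, 1.0, experts))
```

Only two of the eight expert functions are evaluated per call, which is the source of the MoE compute savings.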

Still have questions?

Can't find the answer you're looking for? Please chat with our friendly team.

Get In Touch

Model Specifications

Release Date: April 28, 2024

Max. Context Tokens: 32K

Max. Output Tokens: 4K

Knowledge Cut-Off Date: October 2021

MMLU: 77.75%

License: Open-Source

Technical Report/Model Card:

LMSYS Elo Score: 1147
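The specifications above imply a practical constraint: requests should cap generation at the 4K output-token ceiling. A minimal sketch of building a request body with the listed model ID, assuming an OpenAI-compatible chat-completions schema (the gateway's actual endpoint and request format may differ):

```python
import json

MODEL_ID = "mixtral-8x22b-instruct"  # the model ID listed above
MAX_OUTPUT_TOKENS = 4_096            # the listed 4K output-token ceiling

def build_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build a chat request body, clamping max_tokens to the model's output limit."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": min(max_tokens, MAX_OUTPUT_TOKENS),
    }

# Asking for 8K output tokens gets silently clamped to the 4K limit.
body = build_request("Explain mixture-of-experts in two sentences.", max_tokens=8_000)
print(json.dumps(body, indent=2))
```

Clamping client-side avoids a round trip that would otherwise fail validation at the API.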

Pricing

$/Million Input Tokens: $1.20

$/Million Output Tokens: $1.20
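As a rough illustration of the rates above ($1.20 per million tokens for both input and output), a minimal per-request cost estimator:

```python
# Listed rates for mixtral-8x22b-instruct, USD per million tokens.
INPUT_RATE = 1.2
OUTPUT_RATE = 1.2

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed per-million-token rates."""
    return (input_tokens / 1_000_000) * INPUT_RATE + (output_tokens / 1_000_000) * OUTPUT_RATE

# Example: a 30K-token prompt (near the 32K context limit) with a 4K-token reply.
print(round(estimate_cost(30_000, 4_000), 4))  # 0.0408
```

Even a request that nearly fills the context window costs only a few cents at these rates.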

Live updates via Portkey Pricing API. Coming Soon...

© 2024 Portkey, Inc. All rights reserved
