LLaMA v3.1 405B Instruct

by Meta

Model ID:

Llama-3.1-405b-instruct

Use This Model

Frequently Asked Questions

  1. What makes LLaMA v3.1 405B Instruct unique in the LLaMA family?
    LLaMA v3.1 405B Instruct stands out for its 405-billion-parameter scale, which lets it handle complex tasks with greater accuracy and depth of understanding than any other model in the family.

  2. What are the hardware requirements for running LLaMA v3.1 405B Instruct?
    Running LLaMA v3.1 405B Instruct typically requires high-performance hardware, including multiple GPUs with substantial VRAM (at least 48 GB per GPU) and significant RAM to accommodate its large model size effectively.

  3. How does its performance compare to other large language models?
    Its performance is competitive with other leading large language models, often excelling in complex reasoning tasks and nuanced language understanding, making it suitable for advanced applications in various domains.
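To make the FAQ concrete, here is a minimal sketch of a chat request body for an OpenAI-compatible chat-completions endpoint, using the Model ID listed on this page. The endpoint shape and parameter names are assumptions based on the common OpenAI-compatible convention, not part of this page; check your provider's documentation before use.

```python
import json

# Model ID as listed on this page
MODEL_ID = "Llama-3.1-405b-instruct"

def build_chat_payload(user_message: str, max_tokens: int = 256) -> dict:
    """Build a request body in the common OpenAI-compatible
    chat-completions shape (an assumption; verify with your provider)."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": user_message}],
        # Completion length must stay within the model's 8K output limit.
        "max_tokens": max_tokens,
    }

payload = build_chat_payload("Summarize this page in one sentence.")
print(json.dumps(payload, indent=2))
```

The prompt plus completion must fit within the 128K-token context window noted in the specifications below.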

Still have questions?

Can't find the answer you're looking for? Please chat with our friendly team.

Get In Touch

Model Specifications

Release Date:

July 23, 2024

Model Variants:

accounts/fireworks/models/llama-v3p1-405b-instruct

Max. Context Tokens:

128K

Max. Output Tokens:

8K

Knowledge Cut-Off Date:

December 2023

MMLU:

87.3%

License:

Open-Source

Technical Report/Model Card:

LMSYS Elo Score:

1266

Pricing

$/Million Input Tokens

$3

$/Million Output Tokens

$3

Live updates via Portkey Pricing API. Coming Soon...
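At the listed rates ($3 per million tokens for both input and output), the cost of a single request can be estimated with simple arithmetic:

```python
# Rates as listed on this page
INPUT_PRICE_PER_M = 3.0   # $ per million input tokens
OUTPUT_PRICE_PER_M = 3.0  # $ per million output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for one request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 100K-token prompt (within the 128K context window)
# with a 2K-token completion.
print(f"${estimate_cost(100_000, 2_000):.2f}")  # → $0.31
```

Actual billing depends on the provider's tokenizer and rounding; treat this as a back-of-the-envelope estimate.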

© 2024 Portkey, Inc. All rights reserved
