LLaMA v3.1 70B Instruct

by Meta

Model ID: Llama-3.1-70b-instruct

Use This Model
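To illustrate the "Use This Model" flow, here is a minimal sketch that calls the model by the Model ID listed above through the portkey-ai Python SDK's OpenAI-style chat interface. The API key, virtual key, prompt, and max_tokens value are placeholder assumptions, not values taken from this page.

```python
# Minimal sketch (not an official snippet): calling Llama-3.1-70b-instruct
# through the portkey-ai SDK's OpenAI-compatible chat interface.
# PORTKEY_API_KEY and LLAMA_PROVIDER_KEY are placeholder credentials.
from portkey_ai import Portkey

client = Portkey(
    api_key="PORTKEY_API_KEY",         # placeholder
    virtual_key="LLAMA_PROVIDER_KEY",  # placeholder provider key
)

response = client.chat.completions.create(
    model="Llama-3.1-70b-instruct",    # Model ID listed above
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the plot of Hamlet in two sentences."},
    ],
    max_tokens=512,  # must stay within the model's 8K output-token limit
)

print(response.choices[0].message.content)
```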

Frequently Asked Questions

  1. What are the main use cases for LLaMA v3.1 70B Instruct?
    Main use cases include chatbots, content creation, summarization tasks, and interactive applications where coherent and contextually aware language generation is essential.

  2. How does it compare to GPT-4 in terms of performance?
    Both models perform well; LLaMA v3.1 70B Instruct generally competes closely with GPT-4, with results varying by task, and it may have an edge in areas such as multilingual tasks or certain instruction-following scenarios.

  3. What is the context window size for LLaMA v3.1 70B Instruct?
    The context window size for LLaMA v3.1 70B Instruct is 128,000 tokens (128K).

Still have questions?

Can't find the answer you’re looking for? Please chat with our friendly team.

Get In Touch

Model Specifications

Release Date: 23/7/2024
Max. Context Tokens: 128K
Max. Output Tokens: 8K
Knowledge Cut-Off Date: December 2023
MMLU: 76.8%
License: Open-Source
Technical Report/Model Card:
LMSYS Elo Score: 1248

Pricing

$/Million Input Tokens: $0.90
$/Million Output Tokens: $0.90

Live updates via Portkey Pricing API. Coming Soon...
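As a rough worked example of the pricing above (a sketch only; actual billing is whatever the live Pricing API reports), the snippet below estimates the cost of a single request at the listed $0.90 per million input tokens and $0.90 per million output tokens:

```python
# Sketch: estimate request cost from the per-token prices listed above.
PRICE_PER_M_INPUT_USD = 0.90   # $ per 1M input tokens
PRICE_PER_M_OUTPUT_USD = 0.90  # $ per 1M output tokens

def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one request."""
    return (input_tokens / 1_000_000) * PRICE_PER_M_INPUT_USD + \
           (output_tokens / 1_000_000) * PRICE_PER_M_OUTPUT_USD

# Example: a request that fills the 128K context and returns an 8K completion.
print(f"${estimate_cost_usd(128_000, 8_000):.4f}")  # ≈ $0.1224
```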

© 2024 Portkey, Inc. All rights reserved
