LLaMA v3 70B Instruct
by Meta

Model ID: Llama-3-70b-instruct

Use This Model
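
A minimal sketch of calling the model through an OpenAI-compatible chat completions endpoint. The base URL and API key below are placeholders, and the exact model identifier may differ by provider; the value shown is the Model ID listed above.

    # Minimal sketch: querying Llama-3-70b-instruct via an OpenAI-compatible API.
    # The base_url and api_key are placeholders for whichever provider hosts the model.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://your-provider.example.com/v1",  # placeholder endpoint
        api_key="YOUR_API_KEY",                           # placeholder key
    )

    response = client.chat.completions.create(
        model="Llama-3-70b-instruct",  # Model ID from this page; provider naming may vary
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Explain retrieval-augmented generation in two sentences."},
        ],
        max_tokens=512,  # must fit within the model's 8K output limit
    )
    print(response.choices[0].message.content)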

Frequently Asked Questions

  1. What are the key differences between LLaMA v3 70B Instruct and v3.1?
    Llama 3.1 extends the context window from 8K to 128K tokens, improves multilingual and tool-use capabilities, and adds a larger 405B variant, which together give it better overall performance and adaptability than this earlier release.

  2. How does LLaMA v3 70B Instruct perform in multilingual tasks?
    It handles widely spoken non-English languages reasonably well, but it was trained and tuned primarily for English; the later 3.1 release offers markedly stronger multilingual support.

  3. What are the minimum system requirements for running LLaMA v3 70B Instruct locally?
    The 70B model is too large for a single consumer GPU: the weights alone need roughly 140 GB of VRAM in FP16, or around 35 GB with 4-bit quantization, plus memory for the KV cache (see the memory sketch after this list). Plan on multiple data-center GPUs or a heavily quantized setup with ample system RAM; most users access the model through a hosted API instead.
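
A rough back-of-the-envelope sketch of why a 16 GB GPU is not enough for local inference: weight memory alone scales with parameter count and precision, and the KV cache adds more on top.

    # Approximate weight memory for a 70B-parameter model at common precisions.
    # Actual usage is higher once the KV cache and runtime overhead are added.
    PARAMS = 70e9

    def weight_memory_gb(bytes_per_param: float) -> float:
        return PARAMS * bytes_per_param / 1e9

    for label, bytes_per_param in [("FP16", 2.0), ("INT8", 1.0), ("4-bit", 0.5)]:
        print(f"{label}: ~{weight_memory_gb(bytes_per_param):.0f} GB of weights")

    # FP16:  ~140 GB  (multiple data-center GPUs)
    # INT8:   ~70 GB
    # 4-bit:  ~35 GB  (still beyond a single 24 GB consumer GPU)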


Model Specifications

Release Date: April 18, 2024
Max. Context Tokens: 8K
Max. Output Tokens: 8K
Knowledge Cut-Off Date: December 2023
MMLU: 76.8%
License: Open-Source
Technical Report/Model Card:
LMSYS Elo Score: 1206
Berkeley Function Calling Ability Score: 49.55

Pricing

$/Million Input Tokens: $0.90
$/Million Output Tokens: $0.90

Live updates via Portkey Pricing API. Coming Soon...
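
A simple sketch of estimating per-request cost at the rates listed above ($0.90 per million tokens for both input and output); the token counts in the example are illustrative only.

    # Per-request cost at $0.90 per million input tokens and $0.90 per million output tokens.
    INPUT_PRICE_PER_M = 0.90
    OUTPUT_PRICE_PER_M = 0.90

    def request_cost_usd(input_tokens: int, output_tokens: int) -> float:
        return (input_tokens / 1e6) * INPUT_PRICE_PER_M + (output_tokens / 1e6) * OUTPUT_PRICE_PER_M

    # Example: a 3,000-token prompt with a 500-token completion.
    print(f"${request_cost_usd(3_000, 500):.5f}")  # ≈ $0.00315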
