LLaMA v3 8B Instruct

by Meta

Model ID: Llama-3-8b-instruct

Use This Model
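
The model ID above is what you reference when calling the model programmatically. Below is a minimal sketch using the OpenAI Python SDK against a generic OpenAI-compatible endpoint; the base URL, environment variable, and exact model string are placeholders that vary by provider, not values documented on this page.

```python
# Minimal sketch: calling LLaMA v3 8B Instruct through an OpenAI-compatible
# endpoint. The base URL, environment variable, and exact model string are
# placeholders -- substitute the values your provider documents.
import os

from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="https://example-gateway.invalid/v1",  # placeholder endpoint
    api_key=os.environ["EXAMPLE_API_KEY"],          # placeholder credential
)

response = client.chat.completions.create(
    model="Llama-3-8b-instruct",  # model ID as listed on this page
    messages=[
        {"role": "system", "content": "You are a concise, helpful assistant."},
        {"role": "user", "content": "Summarize the advantages of an 8B model."},
    ],
    max_tokens=512,   # stays well under the 8K output ceiling
    temperature=0.7,
)

print(response.choices[0].message.content)
```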

Frequently Asked Questions

  1. What are the advantages of using the smaller 8B model over larger LLaMA models?
    The smaller 8B model offers advantages such as reduced computational resource requirements, faster inference times, and lower operational costs while still providing robust performance for many applications.

  2. How does LLaMA v3 8B Instruct perform in coding tasks?
    It performs adequately in coding tasks, capable of generating code snippets and providing explanations or debugging assistance, though larger models may offer more comprehensive capabilities.

  3. What is the context window size for LLaMA v3 8B Instruct?
    LLaMA v3 8B Instruct has an 8,192-token (8K) context window, shared between the prompt and the generated output. The sketch after this list shows one way to check a prompt against that limit.
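
To make the context limit concrete, the sketch below counts a chat prompt's tokens before sending it. It assumes the `transformers` library and access to the gated `meta-llama/Meta-Llama-3-8B-Instruct` repository on Hugging Face; both are assumptions for illustration, not requirements of this page.

```python
# Sketch: checking that a prompt fits LLaMA v3 8B Instruct's 8,192-token
# context window (prompt and completion share this budget).
# Assumes `pip install transformers` and access to the gated
# meta-llama/Meta-Llama-3-8B-Instruct repo on Hugging Face.
from transformers import AutoTokenizer

CONTEXT_WINDOW = 8192       # total tokens: prompt + completion
RESERVED_FOR_OUTPUT = 1024  # leave room for the model's reply

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain the trade-offs of 8B vs 70B models."},
]

# apply_chat_template adds the Llama 3 chat special tokens, so this count
# reflects what the model actually receives.
prompt_tokens = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True
)

budget = CONTEXT_WINDOW - RESERVED_FOR_OUTPUT
if len(prompt_tokens) > budget:
    print(f"Prompt is {len(prompt_tokens)} tokens; trim it below {budget}.")
else:
    print(f"Prompt uses {len(prompt_tokens)} of {budget} available tokens.")
```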

Still have questions?

Can't find the answer you're looking for? Please chat with our friendly team.

Get In Touch

Model Specifications

Release Date:

April 18, 2024

Max. Context Tokens:

8K

Max. Output Tokens:

8K

Knowledge Cut-Off Date:

March 2023

MMLU:

68%

License:

Open weights (Meta Llama 3 Community License)

Technical Report/Model Card:

LMSYS Elo Score:

1152

Pricing

$/Million Input Tokens:

$0.20

$/Million Output Tokens:

$0.20

Live updates via Portkey Pricing API. Coming Soon...
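
As a quick illustration of the per-token prices listed above, the sketch below estimates the cost of a single request from its token counts. The token counts are arbitrary example values, and actual billing depends on your provider.

```python
# Sketch: estimating request cost from the prices listed above
# ($0.20 per million input tokens, $0.20 per million output tokens).
INPUT_PRICE_PER_M = 0.20   # USD per 1,000,000 input tokens
OUTPUT_PRICE_PER_M = 0.20  # USD per 1,000,000 output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (
        input_tokens / 1_000_000 * INPUT_PRICE_PER_M
        + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M
    )

# e.g. a 3,000-token prompt with a 500-token reply:
print(f"${estimate_cost(3_000, 500):.6f}")  # -> $0.000700
```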
