Jamba 1.5 Large

by AI21

Model ID: jamba-1.5-large

Use This Model
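Below is a minimal sketch of calling the model by the ID above through Portkey's OpenAI-compatible Python SDK (pip install portkey-ai). The key placeholders and the prompt are illustrative, and your provider routing may differ.

    from portkey_ai import Portkey

    # Placeholder credentials -- substitute your own Portkey API key and
    # the virtual key that routes to your AI21 account.
    client = Portkey(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_AI21_VIRTUAL_KEY",
    )

    # Standard OpenAI-style chat completion, addressed to Jamba 1.5 Large.
    completion = client.chat.completions.create(
        model="jamba-1.5-large",
        messages=[{"role": "user", "content": "Summarize the key points of this contract: ..."}],
        max_tokens=512,
    )
    print(completion.choices[0].message.content)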

Frequently Asked Questions

  1. What are the key features of Jamba 1.5 Large's hybrid SSM-Transformer architecture?
    Jamba 1.5 Large interleaves Transformer attention layers with Mamba (structured state-space) layers, pairing the speed and memory efficiency of SSMs on long sequences with the modeling quality of attention, so it handles extensive inputs efficiently while maintaining high performance.

  2. How does Jamba 1.5 Large handle long-context tasks?
    It excels at long-context tasks with a maximum context window of 256,000 tokens, letting it process extensive inputs without losing coherence; a rough token-budget check is sketched after this list.

  3. What is the inference speed of Jamba 1.5 Large compared to similar models?
    It delivers up to 2.5x faster inference on long contexts than comparable models, making it well suited to real-time applications.
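As referenced in question 2, here is a rough pre-flight check that a prompt fits the 256K-token window. This is a sketch using the common ~4-characters-per-token heuristic for English text, not Jamba's actual tokenizer, so treat the estimate as approximate and use the model's own tokenizer for exact counts.

    MAX_CONTEXT_TOKENS = 256_000

    def estimate_tokens(text: str) -> int:
        # Coarse heuristic: English prose averages roughly 4 characters per token.
        return max(1, len(text) // 4)

    def fits_in_context(prompt: str, reserved_for_output: int = 4_000) -> bool:
        # Leave headroom for the model's reply inside the same context window.
        return estimate_tokens(prompt) + reserved_for_output <= MAX_CONTEXT_TOKENS

    long_document = "lorem ipsum " * 80_000   # stand-in for a large input
    print(fits_in_context(long_document))     # True: ~240K estimated tokens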

Model Specifications

Release Date: August 22, 2024

Max. Context Tokens: 256K

Knowledge Cut-Off Date: March 5, 2024

MMLU: 81.2%

License: Open-Source

Technical Report/Model Card:

LMSYS Elo Score: 1220

Pricing

$/Million Input Tokens: $2

$/Million Output Tokens: $8
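A quick worked example of these rates, with hypothetical request sizes: 100K input tokens and 10K output tokens cost $0.20 + $0.08 = $0.28.

    INPUT_USD_PER_M = 2.0    # $ per million input tokens (from the rates above)
    OUTPUT_USD_PER_M = 8.0   # $ per million output tokens

    def request_cost(input_tokens: int, output_tokens: int) -> float:
        # Cost scales linearly with token counts at per-million-token rates.
        return (input_tokens * INPUT_USD_PER_M + output_tokens * OUTPUT_USD_PER_M) / 1_000_000

    print(f"${request_cost(100_000, 10_000):.2f}")  # $0.28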

Live updates via Portkey Pricing API. Coming Soon...
