LLaMA 3.1 Sonar Large 128k Online


by

Perplexity

Model ID:

llama-3.1-sonar-large-128k-online


Overview

LLaMA 3.1 Sonar Large 128k Online is a large language model designed for online applications, with a context window of up to 128,000 tokens. It is particularly effective for tasks that require deep contextual understanding and long-form content generation.

Specializations

  • Enhanced capabilities: Compared to the smaller Sonar variant, it offers improved performance and a broader range of capabilities, including more complex reasoning and creative text generation.

  • Online accessibility: Like its smaller counterpart, it's available online, allowing for easy integration into various applications.

  • Large context window: The 128k token context window enables it to process and generate longer, more coherent text outputs.
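Before sending very long inputs, it can help to check whether they fit in the 128k window. The sketch below uses a rough heuristic of about four characters per token; the heuristic and the function names are illustrative assumptions, not part of the Portkey or Perplexity APIs.

```javascript
// Rough pre-flight check against the model's 128,000-token context window.
// ~4 characters per token is a common heuristic for English text (an assumption,
// not an exact tokenizer count).
const CONTEXT_WINDOW = 128000;

function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Returns true if the combined messages, plus room reserved for the model's
// reply, are likely to fit in the context window.
function fitsInContext(messages, reserveForOutput = 4000) {
  const promptTokens = messages.reduce(
    (sum, m) => sum + estimateTokens(m.content),
    0
  );
  return promptTokens + reserveForOutput <= CONTEXT_WINDOW;
}
```

For exact counts you would use the model's actual tokenizer; this heuristic is only a cheap guard before making a request.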

Integration Guide (JavaScript)

To use this model through Portkey, follow these steps:

1. Install the Portkey SDK:

```shell
npm install --save portkey-ai
```

2. Set up the client with Portkey:

```javascript
import Portkey from 'portkey-ai'

const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY",
  virtualKey: "VIRTUAL_KEY" // Your Perplexity AI Virtual Key
})
```

3. Make a request:

```javascript
const chatCompletion = await portkey.chat.completions.create({
  messages: [{ role: 'user', content: 'Say this is a test' }],
  model: 'llama-3.1-sonar-large-128k-online',
});

console.log(chatCompletion.choices[0].message.content);
```
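Online requests like the one above can fail transiently (network errors, rate limits), so it is common to wrap the call in a retry helper. Below is a minimal sketch; the function name, retry counts, and backoff delays are illustrative assumptions, not Portkey SDK defaults.

```javascript
// Generic retry wrapper with exponential backoff for transient failures.
// All names and parameters here are illustrative, not part of the Portkey SDK.
async function withRetry(fn, { retries = 3, baseDelayMs = 200 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < retries) {
        // Backoff doubles each attempt: 200ms, 400ms, 800ms, ...
        await new Promise((resolve) =>
          setTimeout(resolve, baseDelayMs * 2 ** attempt)
        );
      }
    }
  }
  throw lastError;
}
```

Usage would wrap the request from step 3, e.g. `withRetry(() => portkey.chat.completions.create({ ... }))`.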

Model Specifications

Release Date: 12/2/2024

Max. Context Tokens: 128K

Model Size: 70B

License: Open-Source

Pricing

$/Million Input Tokens: $0.001

$/Million Output Tokens: $0.001
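With per-million-token rates, a request's cost is simple arithmetic. The sketch below plugs in the rates listed above; the constant and function names are illustrative, not part of any Portkey API.

```javascript
// Cost estimate using the listed rates: $0.001 per million input tokens
// and $0.001 per million output tokens. Names here are illustrative.
const INPUT_RATE_PER_MILLION = 0.001;
const OUTPUT_RATE_PER_MILLION = 0.001;

function estimateCost(inputTokens, outputTokens) {
  return (
    (inputTokens / 1e6) * INPUT_RATE_PER_MILLION +
    (outputTokens / 1e6) * OUTPUT_RATE_PER_MILLION
  );
}
```

For example, a request using one million input tokens and no output tokens would cost $0.001 at these rates.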


© 2024 Portkey, Inc. All rights reserved