LLaMA 3.1 Sonar Small 128k Online

by Perplexity

Model ID: llama-3.1-sonar-small-128k-online


Overview

LLaMA 3.1 Sonar Small 128k Online is the smaller variant of Perplexity's Sonar family, optimized for online (web-grounded) applications with a 128,000-token context window. It balances performance and efficiency, making it suitable for applications that need rapid responses without compromising quality.

Specializations

  • Online access: the model is built for online use, making it well suited for quick interactions and real-time applications that need up-to-date answers.

  • Compact size: at 8B parameters it balances performance with resource efficiency, making it a good fit for less demanding tasks.

  • Factual responses: it focuses on providing accurate, informative answers grounded in real-world knowledge.

Integration Guide (JavaScript)

To use this model through Portkey, follow these steps:

1. Install Portkey SDK:

npm install --save portkey-ai

2. Set up client with Portkey:

import Portkey from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY",
    virtualKey: "VIRTUAL_KEY" // Your Perplexity AI Virtual Key
})

3. Make a request:

const chatCompletion = await portkey.chat.completions.create({

messages: [{ role: 'user', content: 'Say this is a test' }],

model: 'llama-3-1-sonar-small-128k-online',

})

console.log(chatCompletion.choices[0].message.content);
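
For longer responses you may want to stream tokens as they are generated. The snippet below is a minimal sketch that assumes the Portkey SDK's OpenAI-compatible streaming interface (a stream: true flag and an async-iterable response); check the Portkey documentation for the exact chunk shape.

const stream = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'llama-3.1-sonar-small-128k-online',
    stream: true, // assumed OpenAI-compatible streaming flag
})

for await (const chunk of stream) {
    // Each chunk is assumed to carry an incremental delta, as in the OpenAI format.
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
}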

Model Specifications

Release Date: 12/2/2024
Max. Context Tokens: 128K
Model Size: 8B
License: Open-Source

Pricing

$/Million Input Tokens: $0.20
$/Million Output Tokens: $0.20
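
To illustrate how these rates add up, the sketch below uses a small hypothetical helper (not part of the Portkey SDK) to estimate per-request cost from token counts at the prices listed above.

// Hypothetical cost estimator: both input and output tokens are billed
// at $0.20 per million tokens, per the pricing table above.
const RATE_PER_MILLION = 0.2;

function estimateCostUSD(inputTokens, outputTokens) {
    return ((inputTokens + outputTokens) / 1_000_000) * RATE_PER_MILLION;
}

// Example: 10,000 input tokens + 2,000 output tokens ≈ $0.0024
console.log(estimateCostUSD(10_000, 2_000));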

