Overview
The Ministral 8B model builds on the foundation set by the 3B version, with 8 billion parameters that strengthen its reasoning and function-calling capabilities. Like the 3B, it is tailored for on-device use, making it suitable for applications that need efficient processing without relying on cloud resources. It outperforms the 3B model across a range of tasks, with notable gains in both speed and accuracy. It supports the same context length of up to 128K tokens and uses a sliding-window attention pattern for faster processing of long inputs. This makes it particularly effective for orchestrating workflows and creating specialized task workers across domains, from software development to data analysis.
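To make the sliding-window idea concrete: under causal sliding-window attention, each token attends only to the most recent W tokens rather than the entire prefix, which is what keeps long-context processing fast. The sketch below is illustrative only; the window size used here is made up for demonstration and is not the model's actual internal window.

```javascript
// Sketch: which earlier positions token i can attend to under causal
// sliding-window attention with window size `windowSize`.
// (windowSize here is illustrative, not the model's real setting.)
function slidingWindowSpan(i, windowSize) {
  const start = Math.max(0, i - windowSize + 1);
  const span = [];
  for (let j = start; j <= i; j++) span.push(j);
  return span;
}

// With a window of 3, token 5 attends only to tokens 3..5,
// instead of all of 0..5 under full causal attention.
console.log(slidingWindowSpan(5, 3)); // [3, 4, 5]
```

The cost per token thus stays bounded by the window size instead of growing with the full sequence length.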
Specializations
High performance/price ratio: Strong performance for its size makes it a cost-effective option for a wide range of applications.
Large context window: The 128K-token context window lets it process long inputs and generate longer, more coherent outputs.
Multilingual support: It handles multiple languages, making it suitable for global applications.
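When working near the 128K-token limit, it helps to budget the context before sending a request. The sketch below uses a rough 4-characters-per-token heuristic; this ratio and the helper names are illustrative assumptions, not Mistral's actual tokenizer, so use a real tokenizer for production budgeting.

```javascript
// Sketch: naive check that a prompt fits the 128K-token context window,
// leaving headroom for the model's output.
const CONTEXT_LIMIT = 128_000;

function estimateTokens(text) {
  return Math.ceil(text.length / 4); // rough heuristic, not a real tokenizer
}

function fitsContext(text, reservedForOutput = 4_000) {
  return estimateTokens(text) + reservedForOutput <= CONTEXT_LIMIT;
}

console.log(fitsContext("a".repeat(400_000))); // true: ~100K tokens + 4K reserve
```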
Integration Guide (JavaScript)
To use this model through Portkey, follow these steps:
1. Install Portkey SDK:
npm install --save portkey-ai
2. Set up client with Portkey:
// Import and initialize Portkey
import Portkey from 'portkey-ai'

const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY", // Replace with your Portkey API key
  virtualKey: "VIRTUAL_KEY"  // Your Together AI Virtual Key created in Portkey
})
3. Make a request:
const chatCompletion = await portkey.chat.completions.create({
  messages: [{ role: 'user', content: 'Say this is a test' }],
  model: 'mistralai/ministral-8B',
});

console.log(chatCompletion.choices);
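For longer responses you may prefer streaming. The sketch below assumes the Portkey SDK's stream option behaves like the OpenAI-compatible Node interface (chunks carrying choices[0].delta.content); the joinDeltas helper is our own illustrative code, so verify the exact shape against Portkey's current docs.

```javascript
// Illustrative helper: accumulate streamed delta fragments into one string.
function joinDeltas(chunks) {
  return chunks
    .map((chunk) => chunk.choices?.[0]?.delta?.content ?? "")
    .join("");
}

// Assumed usage with the client from step 2 (stream: true mirrors the
// OpenAI-compatible interface):
async function streamCompletion(portkey) {
  const stream = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'mistralai/ministral-8B',
    stream: true,
  });
  const received = [];
  for await (const chunk of stream) received.push(chunk);
  return joinDeltas(received);
}
```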
Model Specifications
Release Date: 9/10/2024
Max. Context Tokens: 128K
Max. Output Tokens: 4K
Model Size: 8B
License: Open-Source
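Since the model emits at most 4K output tokens, it is worth clamping any requested max_tokens to that ceiling before sending a request. The helper below and the exact 4096 constant are illustrative assumptions; check the provider's current docs for the precise cap.

```javascript
// Illustrative: clamp a requested max_tokens value to the model's
// ~4K output ceiling (4096 is an assumed value for the 4K cap).
const MAX_OUTPUT_TOKENS = 4096;

function clampMaxTokens(requested) {
  return Math.min(Math.max(1, requested), MAX_OUTPUT_TOKENS);
}

// Example request body using the clamp:
const body = {
  messages: [{ role: 'user', content: 'Summarize this document.' }],
  model: 'mistralai/ministral-8B',
  max_tokens: clampMaxTokens(10_000), // clamped to 4096
};
```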
© 2024 Portkey, Inc. All rights reserved