Effortlessly create and manage powerful GPT prompts, and mix AI models and providers.
Boost productivity, save time, and control your GPT solutions with Promptitude.
Combines supervised fine-tuning with large-scale reinforcement learning for superior logical inference and problem-solving. Offers multimodal capabilities under an MIT license, and is more cost-efficient and scalable than comparable reasoning models.
Supports complex reasoning, multiple languages, and various deployment options for efficient performance.
Sonar Huge excels in deep reasoning for complex queries, with state-of-the-art AI extending search to X and Reddit.
Enhances context recognition and supports multilingual interactions for diverse applications. Prioritizes safe third-party integrations.
Offers an enhanced context window and multilingual outputs for various use cases at 8B parameters, and ensures safe integration of third-party tools.
Advanced Online AI: Excels at processing extensive natural-language tasks.
Responsive Online AI: Optimized for real-time, fact-based communication.
GPT-4 Turbo with Vision: Enhancements include image processing and extended token output for efficiency.
Gemini 1.0 Pro Vision processes text and visual inputs effectively, excelling at multimodal tasks across diverse applications.
Explore Gemma 7B: a responsibly developed decoder-only transformer that performs strongly on safe text-generation benchmarks.
Offers a strong value proposition for scalable operations, with intelligence competitive with similar models.
A fine-tuned 70B-parameter model, suitable for larger-scale tasks such as language modeling, text generation, and dialogue systems.
Command Light is a smaller version of Command, Cohere's generative LLM (6B parameters).
Llama 2 is a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters.
Capable of very simple tasks, usually the fastest model in the GPT-3 series, and lowest cost [ada]
Capable of straightforward tasks, very fast, and lower cost than GPT-3 instruct [babbage]
Very capable, but faster and lower cost than GPT-3 instruct [curie]
As capable as GPT-3.5 chat, a bit more expensive, ideal for long prompts that require a lot of context [gpt-3.5-turbo-16k]
Recommended for most prompts: Fast and capable GPT-3.5 model at 1/10th the cost of GPT-3.5 instruct [gpt-3.5-turbo]
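For illustration only, here is a minimal sketch of calling a chat model such as gpt-3.5-turbo directly through the OpenAI Python SDK, outside of Promptitude. The prompt text and settings are assumptions chosen for demonstration; only the model id comes from the entry above.

```python
# Minimal sketch: calling gpt-3.5-turbo via the OpenAI Python SDK (openai>=1.0).
# Assumes OPENAI_API_KEY is set in the environment; shown for illustration only.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # model id referenced in the entry above
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the benefits of prompt templates in one sentence."},
    ],
    temperature=0.7,
)

print(response.choices[0].message.content)
```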
Capable of more complex text generation and analysis tasks, running on GPU, 20B parameters [gpt-neox-20b]
A faster version of GPT-J running on GPU, 6B parameters [fast-gpt-j]
Base model for simple tasks like classification, works best with few-shot examples, 6B parameters [gpt-j]
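Because base models like this one work best with few-shot examples, the sketch below shows one way such a prompt could be assembled for a simple classification task. The example reviews, labels, and helper function are illustrative assumptions, not part of the catalog.

```python
# Illustrative sketch: building a few-shot classification prompt for a base
# (non-instruction-tuned) model such as gpt-j. Examples and labels are assumed.
FEW_SHOT_EXAMPLES = [
    ("The delivery arrived two days late and the box was damaged.", "negative"),
    ("Setup took five minutes and everything worked immediately.", "positive"),
    ("The manual is thorough but a little dry.", "neutral"),
]

def build_prompt(new_text: str) -> str:
    """Concatenate labeled examples, then the unlabeled input, so the base
    model completes the final 'Sentiment:' line with a label."""
    lines = []
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {new_text}\nSentiment:")
    return "\n".join(lines)

print(build_prompt("Battery life is outstanding, screen is mediocre."))
```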
Cost-effective and highly customizable LLM. Right-sized for specific use cases, ideal for text generation tasks and fine-tuning.
Generates text in a conversational format. Optimized for dialog language tasks such as implementing chatbots or AI agents. Can handle zero-, one-, and few-shot tasks.
Generates text. Optimized for language tasks such as code generation, text generation, text editing, problem solving, recommendation generation, and information extraction.
A faster and cheaper yet still very capable model, which can handle a range of tasks including casual dialogue, text analysis, summarization, and document question-answering.
Anthropic's most powerful model, which excels at a wide range of tasks from sophisticated dialogue and creative content generation to detailed instruction following.
An earlier generation of Anthropic's general-purpose large language models.