Release: 3/28/24

NEW
  • A100s for all: We've heard your feedback and moved all training jobs from A10G to A100 GPUs by default!
  • Faster Training Times: We've also modified the default configurations to speed up A100 training by 2x-5x.
  • New Tutorial: We've added a new “Build Your Own Lora Land” tutorial to the homepage and docs.
  • Mistral-7b-instruct-v0.2 now available: You can now query Mistral-7b-instruct-v0.2 as a Serverless Endpoint, as well as use it for Dedicated Deployments and Fine-Tuning (see the sketch after this list).
  • Deployments UI: We've added the ability to seamlessly create deployments directly from the UI.
  • Ability to query Stopped Models: We've changed the "Cancel" operation to "Stop" during training, allowing you to use the model from the latest saved checkpoint.
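For reference, here is a minimal sketch of querying the new serverless endpoint over HTTP from Python. The host, path, auth header, and payload fields below are illustrative placeholders rather than the documented API; check the docs for the exact endpoint and request schema.

```python
# Hypothetical example: querying the Mistral-7b-instruct-v0.2 serverless endpoint.
# The base URL, path, and payload fields are placeholders -- consult the docs
# for the real endpoint and request format.
import os

import requests

API_TOKEN = os.environ["API_TOKEN"]        # assumed: API token stored in an env var
BASE_URL = "https://serving.example.com"   # placeholder host

response = requests.post(
    f"{BASE_URL}/serverless/mistral-7b-instruct-v0-2/generate",  # placeholder path
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"inputs": "Write a haiku about fine-tuning.", "max_new_tokens": 128},
    timeout=60,
)
response.raise_for_status()
print(response.json())
```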
New Quickstart (screenshot)
Create Deployments via the UI (screenshot)
IMPROVED
Prompt UI Improvements
  • We now show all deployments (regardless of status) in the dropdown
  • We show the status chip next to the deployment name in the dropdown and in the status indicator
  • We also provide a "Stop" button while a response is streaming back to the UI
Pricing
  • We've updated the serverless pricing bucket (Up to 13B) to include larger models (Up to 21B) at the same price ($0.25 / 1k tokens)
  • We've now enabled billing for serverless inference, including both the streaming and non-streaming endpoints
FIXED
  • Prompt UI: We now prevent streaming of multiple responses at the same time
  • Models UI: We fixed the bug where 0 values didn't show in the Learning Curves
  • We've fixed the hanging behavior while using the Python SDK in Colab
  • We've also improved error messages in the SDK so they're concise and readable