AI Productivity Tool
What are the pricing plans for MeteronAI and what features do they include?
MeteronAI offers three pricing plans: Free, Professional, and Business. The Free plan is $0 per month and includes 1 admin or member, 5GB of storage, 1,500 image generations, and 10,000 LLM chat completions. The Professional plan costs $39 per month and provides 5 admins or members, 300GB of storage, 10,000 image generations, and 50,000 LLM chat completions, along with features such as per-user metering, a credit system, and an elastic queue that absorbs high-demand spikes. The Business plan, at $199 per month, includes 30 admins or members, 2TB of storage, 100,000 image generations, and 800,000 LLM chat completions. Features exclusive to the Business plan include intelligent QoS, custom cloud storage (coming soon), and data export (coming soon). Every plan allows purchasing additional storage and image generations, and you can upgrade at any time.
How does MeteronAI handle load balancing and request queues?
MeteronAI provides automatic load balancing and an elastic scaling mechanism that queues and load-balances requests across servers. Users define the number of active servers, and more can be added at any time. MeteronAI also supports intelligent queue prioritization, organizing requests into priority classes (high, medium, low). High-priority requests, typically from VIP users, are processed without queueing delays. Medium-priority requests may incur some delay but are served before low-priority ones. Low-priority requests are processed last, making them suitable for free users, who are served when the system is not under heavy load.
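The priority-class scheme described above can be sketched with a standard priority queue. This is a minimal illustration of the concept, not Meteron's internal implementation; the class names and ordering are taken from the description, everything else is assumed.

```python
import heapq
from itertools import count

# Lower number = served first; maps the high/medium/low classes described above.
PRIORITY = {"high": 0, "medium": 1, "low": 2}

class RequestQueue:
    """Toy priority-class queue: high-priority jobs jump ahead of medium and low."""

    def __init__(self):
        self._heap = []
        self._seq = count()  # tie-breaker preserves FIFO order within a class

    def enqueue(self, request, priority="low"):
        heapq.heappush(self._heap, (PRIORITY[priority], next(self._seq), request))

    def dequeue(self):
        # Pop the lowest (priority, sequence) pair; return just the request.
        return heapq.heappop(self._heap)[2]

q = RequestQueue()
q.enqueue("free-user-job", "low")
q.enqueue("vip-job", "high")
q.enqueue("standard-job", "medium")
order = [q.dequeue() for _ in range(3)]
# VIP work is served first, then medium, then low, regardless of arrival order.
```

Even though the free-user job arrived first, it is served last, matching the behavior described for low-priority requests.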
Can MeteronAI work with any AI model, and does it require specific libraries for integration?
Yes, MeteronAI is compatible with any AI model, supporting both text and image generation models such as Llama, Mistral, Stable Diffusion, and DALL-E. No special libraries are required for integration; developers can use their preferred HTTP client, such as curl, Python requests, or JavaScript fetch. Instead of directing requests to an inference endpoint, users send them to Meteron's generation API. This flexibility lets developers keep using tools they already know, simplifying the process of integrating MeteronAI into their projects.
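As a rough sketch of the plain-HTTP integration described above, here is how a generation request might be assembled with Python requests. The endpoint URL, header names, and payload fields below are assumptions for illustration only; consult Meteron's API documentation for the actual values.

```python
import requests

def build_generation_request(prompt, api_key, user_id):
    """Assemble (but do not send) a hypothetical generation API request."""
    req = requests.Request(
        "POST",
        "https://app.meteron.ai/api/images/generate",  # assumed endpoint URL
        headers={
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
            "X-User": user_id,  # assumed header for per-user metering
        },
        json={"prompt": prompt},  # assumed payload shape
    )
    # prepare() serializes the JSON body and sets Content-Type; the result
    # can be dispatched later with requests.Session().send(prepared).
    return req.prepare()

prepared = build_generation_request("a red fox in the snow", "sk-test", "user-42")
```

Because any HTTP client can produce the same request, the equivalent curl or JavaScript fetch call would work identically.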