WebGPU is an innovative web standard that offers direct access to graphics processors (GPUs) from within web browsers. Together with Transformers.js, a state-of-the-art machine-learning library for the web, it enables you to run high-performance computations on your system's GPU directly in your browser, without the GetGenius Cloud or any other third-party integration.
WebGPU's transformative potential is immense: by offering direct, low-level access to modern graphics processors, it lets web applications perform computationally demanding tasks such as AI inference with GetGenius, which can also save you costs.
This technology enables use cases such as the following (a short Transformers.js sketch after the list shows how such tasks run on WebGPU):
Text generation: Using large language models to generate and translate text for your blog posts, newsletters, product descriptions, social-media campaigns and more.
Text classification: Analyzing your text content and assigning labels to it, for example through sentiment analysis.
Image generation: Generating images from input text or source images for blog posts, newsletters and social-media campaigns.
Image classification: Analyzing your media library and assigning keywords to it.
Image segmentation: Dividing images into segments in which each pixel is mapped to an object.
Mask generation: Generating masks for the objects in an image.
Audio generation: Generating natural-sounding speech with text-to-speech and audio-to-audio models.
Audio classification: Analyzing your source audio and assigning keywords to it.
Transcription: Transcribing audio into text.
Model training: Training your own AI models with reinforcement learning from feedback.
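With Transformers.js, each of these use cases maps to a pipeline task that can run on the GPU via WebGPU. The following is a minimal sketch assuming Transformers.js v3 (@huggingface/transformers); the model ids and the audio URL are illustrative examples, not the models GetGenius ships.

```ts
import { pipeline } from '@huggingface/transformers';

// Sentiment analysis (text classification) on the local GPU.
const classify = await pipeline(
  'text-classification',
  'Xenova/distilbert-base-uncased-finetuned-sst-2-english',
  { device: 'webgpu' },
);
console.log(await classify('I love this new feature!'));
// e.g. [{ label: 'POSITIVE', score: 0.99 }]

// Transcription (automatic speech recognition) with a small Whisper model.
const transcribe = await pipeline(
  'automatic-speech-recognition',
  'Xenova/whisper-tiny.en',
  { device: 'webgpu' },
);
console.log(await transcribe('https://example.com/voice-note.wav')); // placeholder URL
```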
WebGPU with GetGenius
In GetGenius, you can use suitable AI models for specific tasks with WebGPU. This requires a device with a compatible browser and GPU.
Under Settings, we offer a comprehensive test to check whether your device and browser meet the requirements for specific tasks.
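Under the hood, such a check can start with the standard WebGPU API: the browser exposes navigator.gpu only when WebGPU is available, and requestAdapter() tells you whether a usable GPU was found. The sketch below illustrates the idea; it is not the actual GetGenius test.

```ts
// Minimal WebGPU capability probe (illustrative sketch, not the GetGenius test).
// TypeScript typings for navigator.gpu come from the @webgpu/types package.
async function supportsWebGPU(): Promise<boolean> {
  // navigator.gpu is only defined in browsers that ship WebGPU.
  if (!navigator.gpu) return false;
  // requestAdapter() resolves to null when no suitable GPU is available.
  const adapter = await navigator.gpu.requestAdapter();
  return adapter !== null;
}

supportsWebGPU().then((ok) => console.log(ok ? 'WebGPU available' : 'WebGPU not available'));
```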
To use WebGPU, change the AI provider in your AI settings to Local.
At our AI Store, we offer a comprehensive selection of models capable of local execution in your browser.
You can change these settings at any time.
When you start inference, GetGenius loads the model in your browser. This may take a while.
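Conceptually, this loading step looks like the sketch below (again assuming Transformers.js v3): the model weights are downloaded once, a progress callback can drive a loading indicator, and the browser caches the files so later runs start faster. The model id is only an example.

```ts
import { pipeline } from '@huggingface/transformers';

// Sketch: load a small text-generation model on WebGPU and report download progress.
const generator = await pipeline('text-generation', 'Xenova/distilgpt2', {
  device: 'webgpu',
  // Fires repeatedly while the weight files are fetched; useful for a loading indicator.
  progress_callback: (progress) => console.log(progress),
});

// Transformers.js caches the downloaded files in the browser, so subsequent loads are much faster.
const output = await generator('Write a tagline for a coffee grinder:', { max_new_tokens: 40 });
console.log(output);
```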
Please note that currently only small or quantized models can be run in the browser, so they offer only limited possibilities. If you need high-quality output, we still recommend using the GetGenius Cloud, a third-party provider such as Anthropic, or your own compute, for example on AWS, Hyperstack, Lambda or Replicate.
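The size constraint is usually handled through quantization. Assuming Transformers.js v3, a quantized variant of a model can be requested with the dtype option, as in this sketch; the model id and dtype value are illustrative.

```ts
import { pipeline } from '@huggingface/transformers';

// Sketch: request a 4-bit quantized variant to keep the download and GPU memory footprint small.
// Quantization trades some output quality for size and speed, which is why larger cloud-hosted
// models can still produce noticeably better results.
const generator = await pipeline('text-generation', 'onnx-community/Qwen2.5-0.5B-Instruct', {
  device: 'webgpu',
  dtype: 'q4', // other values include 'fp32', 'fp16' and 'q8'
});
```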