
As you may already know, the Graphics Processing Unit (GPU) is an electronic subsystem within a computer that was originally specialized for processing graphics. However, in the past 10 years, it has evolved towards a more flexible architecture allowing developers to implement many types of algorithms, not just render 3D graphics, while taking advantage of the unique architecture of the GPU. These capabilities are referred to as GPU Compute, and using a GPU as a coprocessor for general-purpose scientific computing is called general-purpose GPU (GPGPU) programming.

GPU Compute has contributed significantly to the recent machine learning boom, as convolutional neural networks and other models can take advantage of the architecture to run more efficiently on GPUs. With the current Web Platform lacking in GPU Compute capabilities, the W3C's "GPU for the Web" Community Group is designing an API to expose the modern GPU APIs that are available on most current devices. This API is called WebGPU. It is very powerful and quite verbose, as you'll see. In this article, I'm going to focus on the GPU Compute part of WebGPU and, to be honest, I'm just scratching the surface, so that you can start playing on your own. I will be diving deeper and covering WebGPU rendering (canvas, texture, etc.) in forthcoming articles.

WebGPU is available for now in Chrome Canary behind an experimental flag. You can enable it at chrome://flags/#enable-unsafe-webgpu. The API is constantly changing and currently unsafe. As GPU sandboxing isn't implemented yet for the WebGPU API, it is possible to read GPU data belonging to other processes! Don't browse the web with it enabled.

# Access the GPU

Accessing the GPU is easy in WebGPU. Calling navigator.gpu.requestAdapter() returns a JavaScript promise that will asynchronously resolve with a GPU adapter. Think of this adapter as the graphics card. It can either be integrated (on the same chip as the CPU) or discrete (usually a PCIe card that is more performant but uses more power).

Once you have the GPU adapter, call adapter.requestDevice() to get a promise that will resolve with a GPU device you'll use to do some GPU computation.

Both functions take options that allow you to be specific about the kind of adapter (power preference) and device (extensions, limits) you want. For the sake of simplicity, we'll use the default options in this article.
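Putting those two calls together, here is a minimal sketch of the setup described above. The async wrapper, the missing-adapter check, and the console messages are illustrative additions; the two request calls themselves are the ones referred to in the text.

```js
(async () => {
  // Ask the browser for a GPU adapter (think of it as the graphics card).
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    console.log('WebGPU is not available in this browser.');
    return;
  }

  // Ask the adapter for a logical GPU device to run computations on.
  const device = await adapter.requestDevice();
  console.log('GPU device ready:', device);
})();
```

If you do want to be specific, requestAdapter() accepts, for example, `{ powerPreference: 'high-performance' }` to prefer a discrete GPU; here we stick to the defaults.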

# Write buffer memory

Let's see how to use JavaScript to write data to memory for the GPU. This process isn't straightforward because of the sandboxing model used in modern web browsers. The example below shows you how to write four bytes to buffer memory accessible from the GPU.
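As a sketch of that four-byte write, assuming the `device` obtained above and the buffer-mapping API in the current WebGPU spec (`mappedAtCreation`, `getMappedRange()`), the code might look like this:

```js
// Create a 4-byte buffer that is mapped into CPU memory at creation time,
// so JavaScript can fill it before the GPU takes ownership.
const gpuWriteBuffer = device.createBuffer({
  mappedAtCreation: true,
  size: 4,
  usage: GPUBufferUsage.MAP_WRITE | GPUBufferUsage.COPY_SRC
});

// While the buffer is mapped, its memory is exposed to JavaScript as an ArrayBuffer.
const arrayBuffer = gpuWriteBuffer.getMappedRange();

// Write the four bytes.
new Uint8Array(arrayBuffer).set([0, 1, 2, 3]);

// Unmap so the GPU can use the buffer; a mapped buffer is owned by JavaScript
// and can't be referenced by GPU commands until it is unmapped.
gpuWriteBuffer.unmap();
```

This separation between mapped (CPU-visible) and unmapped (GPU-visible) states is how WebGPU reconciles direct memory writes with the browser sandboxing model mentioned above.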
