Cloud servers specially designed for processing massively parallel tasks
GPU instances integrate NVIDIA Tesla V100S graphics processors to meet the requirements of massively parallel processing. As part of the OVHcloud solution, they offer the advantages of on-demand resources and hourly billing. These cloud servers are adapted to the needs of machine learning and deep learning.
Powered by NVIDIA Tesla V100S
These GPUs are among the most powerful on the market and are designed for use in datacentres. They accelerate workloads in artificial intelligence (AI) and graphics computing.
NVIDIA GPU Cloud
To provide the best user experience, OVHcloud and NVIDIA have partnered to offer a best-in-class GPU-accelerated platform for deep learning, high-performance computing and artificial intelligence (AI). It is the simplest way to deploy and maintain GPU-accelerated containers, via a full catalogue.
Between one and four cards with guaranteed performance
Tesla cards are passed directly through to the instance via PCI Passthrough, without a virtualisation layer, so that all of their power is dedicated to your use. Up to four cards can be attached to a single instance to combine their performance, and the hardware delivers all of its computing power to your application.
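As a quick sanity check, and assuming the NVIDIA driver is installed on the instance, `nvidia-smi` lists each passed-through card; on a four-card model you should see four entries (a sketch, with UUIDs elided):

$ nvidia-smi -L
GPU 0: Tesla V100S-PCIE-32GB (UUID: ...)
GPU 1: Tesla V100S-PCIE-32GB (UUID: ...)
...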
ISO/IEC 27001, 27701 and health data hosting compliance
Our cloud infrastructures and services are ISO/IEC 27001, 27017, 27018 and 27701 certified. These certifications attest to the presence of an information security management system (ISMS) for managing risks and vulnerabilities and implementing business continuity, as well as a privacy information management system (PIMS). Thanks to our health data hosting compliance, you can also host healthcare data securely.
NVIDIA Tesla V100S Features
| Performance with NVIDIA GPU Boost | Bidirectional connection bandwidth | CoWoS stacked HBM2 memory |
|---|---|---|
| 16.4 TFLOPS single precision (FP32), 8.2 TFLOPS double precision (FP64), 130 TFLOPS Tensor performance | 32 GB/s (PCIe) | 32 GB capacity, 1,134 GB/s bandwidth |
Use cases
Image recognition
Many industries need to extract data from images in order to classify them, identify an element or build richer documents. With frameworks such as Caffe2 running on the Tesla V100S GPU, use cases in medical imaging, social networks, and public protection and security become easily accessible.
Situation analysis
Some cases require real-time analysis and an appropriate response to varied and unpredictable situations. This kind of technology is used, for example, in self-driving cars and internet network traffic analysis. This is where deep learning comes in, with neural networks that learn independently during a training stage.
Human interaction
In the past, people learned to communicate with machines. We are now in an era where machines are learning to communicate with people. Whether through speech recognition or emotion recognition in sound and video, tools such as TensorFlow push the boundaries of these interactions, opening up a multitude of new uses.
Need to train your artificial intelligence with GPUs?
With our AI Training solution, you can train your AI models efficiently and easily, and optimise your GPU computing resources.
Focus on your business instead of the infrastructure that supports it. Launch your training tasks via a command line, and pay for the resources used by the minute.
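As an illustrative sketch only: training jobs are submitted with the `ovhai` command-line tool, but the image name below is a placeholder and the exact options may differ from your version of the tool, so check the AI Training documentation for the current syntax.

$ ovhai job run --gpu 1 <your-training-image> -- python train.py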
Usage
Get started
Launch your instance by choosing a T2 model and an NGC image suited to your project.
Configure
# replace <tag> with a TensorFlow image tag from the NGC catalogue
$ docker pull nvcr.io/nvidia/tensorflow:<tag>
$ nvidia-docker run -it --rm nvcr.io/nvidia/tensorflow:<tag>
Use
Your AI framework is ready for processing.
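To confirm that the framework can see the passed-through card, a quick check from inside the container (assuming a TensorFlow 2 image) is:

$ python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

An empty list means the container was started without GPU access.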
GPU billing
GPU instances are billed like all of our other instances, on a pay-as-you-go basis at the end of each month. The price depends on the size of the instance you have booted, and the duration of its use.
Other products
Your questions answered
What SLA does OVHcloud guarantee for a GPU instance?
The SLA guarantees 99.999% monthly availability on GPU instances. For further information, please refer to the Terms & conditions.
Which hypervisor is used for instance virtualisation?
Just like other instances, GPU instances are virtualised by the KVM hypervisor in the Linux kernel.
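You can verify this from inside a running instance; assuming a systemd-based Linux distribution, the detected virtualisation technology is reported as KVM:

$ systemd-detect-virt
kvm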
What is PCI Passthrough?
The GPU cards are attached to the physical server's PCI bus. PCI Passthrough is a hypervisor feature that dedicates a piece of hardware to a virtual machine by giving it direct access to the PCI bus, without going through a virtualisation layer.
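Because the card is passed through rather than emulated, it shows up as an ordinary PCI device inside the instance. A sketch of what this looks like (the bus address and exact device string will vary):

$ lspci | grep -i nvidia
00:05.0 3D controller: NVIDIA Corporation GV100GL [Tesla V100S PCIe 32GB]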
Can I resize a GPU instance?
Yes, GPU instances can be upgraded to a higher model after a reboot. However, they cannot be downgraded to a lower model.
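OVHcloud Public Cloud instances can also be managed with standard OpenStack tooling, so an upgrade can be sketched as follows, assuming the OpenStack CLI is configured and using placeholder names:

$ openstack server resize --flavor <larger-gpu-flavor> <instance-name>
# then confirm the resize once the instance has restarted (see the OpenStack documentation)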
Do GPU instances have anti-DDoS protection?
Yes, our anti-DDoS protection is included with all OVHcloud solutions at no extra cost.
Over the course of a month, can I switch to hourly billing from an instance that is currently billed monthly?
If you have monthly billing set up, you cannot switch to hourly billing over the course of the month. Before you launch an instance, please take care to select the billing method that is best suited to your project.