opm-simulators
Opm::gpuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType > Class Template Referenceabstract

GPUSender is a wrapper base class for classes that implement copyOwnerToAll. It exists so that communicators can be written generically against a GPUSender, hiding whether the underlying implementation uses GPU-aware MPI or not. More...

#include <GpuSender.hpp>

Inheritance diagram for Opm::gpuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >:
Opm::gpuistl::GPUAwareMPISender< field_type, block_size, OwnerOverlapCopyCommunicationType > Opm::gpuistl::GPUObliviousMPISender< field_type, block_size, OwnerOverlapCopyCommunicationType >

Public Types

using X = GpuVector<field_type>

Public Member Functions

 GPUSender (const OwnerOverlapCopyCommunicationType &cpuOwnerOverlapCopy)
virtual void copyOwnerToAll (const X &source, X &dest) const =0
 copyOwnerToAll will copy the data in source to all processes.
virtual void initIndexSet () const =0
void project (X &x) const
 project will project x to the owned subspace
void dot (const X &x, const X &y, field_type &output) const
 dot will carry out the dot product between x and y on the owned indices, then sum up the result across MPI processes.
field_type norm (const X &x) const
 norm computes the l^2-norm of x across processes.
const ::Dune::Communication< MPI_Comm > & communicator () const
 communicator returns the MPI communicator used by this GPUSender

Protected Attributes

std::once_flag m_initializedIndices
std::unique_ptr< GpuVector< int > > m_indicesOwner
std::unique_ptr< GpuVector< int > > m_indicesCopy
const OwnerOverlapCopyCommunicationType & m_cpuOwnerOverlapCopy

Detailed Description

template<class field_type, class OwnerOverlapCopyCommunicationType>
class Opm::gpuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >

GPUSender is a wrapper base class for classes that implement copyOwnerToAll. It exists so that communicators can be written generically against a GPUSender, hiding whether the underlying implementation uses GPU-aware MPI or not.

Template Parameters
field_type  is float or double
OwnerOverlapCopyCommunicationType  is typically a Dune::LinearOperator::communication_type
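The pattern can be sketched in plain CPU-only C++: an abstract base fixes the communication point (copyOwnerToAll) while the derived classes choose how the exchange happens. Every name below is a stand-in mirroring the real class shape, not the actual OPM code; std::vector stands in for GpuVector.

```cpp
#include <cassert>
#include <vector>

// Stand-in sketch of the GPUSender shape: the base class owns the
// interface, derived classes own the transport choice (in the real
// code: GPU-aware MPI vs. staging through the CPU).
template <class field_type>
struct SenderSketch {
    using X = std::vector<field_type>; // stand-in for GpuVector<field_type>
    virtual ~SenderSketch() = default;
    virtual void copyOwnerToAll(const X& source, X& dest) const = 0;
};

// One hypothetical concrete sender, analogous in role to
// GPUObliviousMPISender, reduced to the trivial single-rank case.
template <class field_type>
struct PlainCopySender : SenderSketch<field_type> {
    using X = typename SenderSketch<field_type>::X;
    void copyOwnerToAll(const X& source, X& dest) const override {
        dest = source; // single rank: every index is owned
    }
};
```

Generic solver code would then hold a SenderSketch reference and never know which transport is behind it, which is the stated intent of the real class.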

Member Function Documentation

◆ communicator()

template<class field_type, class OwnerOverlapCopyCommunicationType>
const ::Dune::Communication< MPI_Comm > & Opm::gpuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >::communicator ( ) const
inline

communicator returns the MPI communicator used by this GPUSender

Returns
the MPI communicator

◆ copyOwnerToAll()

template<class field_type, class OwnerOverlapCopyCommunicationType>
virtual void Opm::gpuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >::copyOwnerToAll ( const X & source,
X & dest ) const
pure virtual

copyOwnerToAll will copy the data in source to all processes.

Note
Depending on the implementation, this may or may not use GPU aware MPI. If it does not use GPU aware MPI, the data will be copied to the CPU before the communication.
Parameters
[in]  source  the vector whose owned entries are sent
[out]  dest  the vector receiving the data

Implemented in Opm::gpuistl::GPUAwareMPISender< field_type, block_size, OwnerOverlapCopyCommunicationType >, and Opm::gpuistl::GPUObliviousMPISender< field_type, block_size, OwnerOverlapCopyCommunicationType >.
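The contract can be illustrated with a two-rank, in-process simulation. This is a conceptual sketch only; the index layout is invented, and the real senders perform this exchange over MPI, either directly from GPU memory or via a CPU staging copy.

```cpp
#include <cassert>
#include <vector>

// Simulated copyOwnerToAll between two ranks that each hold two entries:
// index 0 is owned by rank 0, index 1 by rank 1, and each rank keeps an
// overlap copy of the index it does not own. The owned value overwrites
// the other rank's copy.
void exchangeOverlap(std::vector<double>& rank0, std::vector<double>& rank1)
{
    rank1[0] = rank0[0]; // rank 0's owned value fills rank 1's copy slot
    rank0[1] = rank1[1]; // rank 1's owned value fills rank 0's copy slot
}
```

After the exchange both ranks see identical values at the shared indices, which is exactly the consistency that iterative solvers need after an owner-local update.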

◆ dot()

template<class field_type, class OwnerOverlapCopyCommunicationType>
void Opm::gpuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >::dot ( const X & x,
const X & y,
field_type & output ) const
inline

dot will carry out the dot product between x and y on the owned indices, then sum up the result across MPI processes.

Parameters
[in]  x  First vector in dot product
[in]  y  Second vector in dot product
[out]  output  result will be stored here
Note
This uses the same interface as its DUNE equivalent.
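Restricting the local dot product to owned indices is what prevents overlap entries from being counted twice across ranks. A minimal sketch, with a hypothetical 0/1 owner mask standing in for the device-side m_indicesOwner list:

```cpp
#include <cassert>
#include <vector>

// Local part of the distributed dot product: only entries this rank
// owns contribute. In the real class the per-rank result is then
// summed across MPI processes (e.g. via the communicator).
double ownedDot(const std::vector<double>& x,
                const std::vector<double>& y,
                const std::vector<int>& ownerMask)
{
    double local = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i)
        local += ownerMask[i] * x[i] * y[i]; // non-owned terms vanish
    return local;
}
```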

◆ norm()

template<class field_type, class OwnerOverlapCopyCommunicationType>
field_type Opm::gpuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >::norm ( const X & x) const
inline

norm computes the l^2-norm of x across processes.

This will compute the dot product of x with itself on owned indices, then sum the result across processes and return the square root of the sum.
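The recipe (owned local dot, global sum, square root) can be checked with two simulated ranks. This is an illustrative stand-in; the real class performs the cross-rank sum over MPI.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Simulated l^2-norm across two ranks: each rank contributes the dot
// product of its owned entries with themselves; the contributions are
// summed and the square root of the global sum is returned.
double simulatedGlobalNorm(const std::vector<double>& rank0Owned,
                           const std::vector<double>& rank1Owned)
{
    double sum = 0.0;
    for (double v : rank0Owned) sum += v * v; // local dot on rank 0
    for (double v : rank1Owned) sum += v * v; // local dot on rank 1
    return std::sqrt(sum);                    // sqrt of the global sum
}
```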

◆ project()

template<class field_type, class OwnerOverlapCopyCommunicationType>
void Opm::gpuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >::project ( X & x) const
inline

project will project x to the owned subspace.

For each component i that is not owned, x_i is set to 0.

Parameters
[in,out]xthe vector to project
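The documented behaviour (zero every non-owned component) is simple to sketch. As before, a hypothetical 0/1 owner mask stands in for the device-side owner index list:

```cpp
#include <cassert>
#include <vector>

// Project onto the owned subspace: non-owned entries are zeroed,
// owned entries are left untouched.
void projectToOwned(std::vector<double>& x, const std::vector<int>& ownerMask)
{
    for (std::size_t i = 0; i < x.size(); ++i)
        if (!ownerMask[i])
            x[i] = 0.0;
}
```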

The documentation for this class was generated from the following file:
GpuSender.hpp