ALP User Documentation 0.7.alpha
Algebraic Programming User Documentation
grb::algorithms Namespace Reference

The namespace for ALP/GraphBLAS algorithms.

Namespaces

namespace  pregel
 The namespace for ALP/Pregel algorithms.
 

Functions

template<Descriptor descr = descriptors::no_operation, typename IOType , typename NonzeroType , typename InputType , typename ResidualType , class Semiring = Semiring< operators::add< InputType, InputType, InputType >, operators::mul< IOType, NonzeroType, InputType >, identities::zero, identities::one >, class Minus = operators::subtract< ResidualType >, class Divide = operators::divide< ResidualType >>
RC bicgstab (grb::Vector< IOType > &x, const grb::Matrix< NonzeroType > &A, const grb::Vector< InputType > &b, const size_t max_iterations, ResidualType tol, size_t &iterations, ResidualType &residual, Vector< InputType > &r, Vector< InputType > &rhat, Vector< InputType > &p, Vector< InputType > &v, Vector< InputType > &s, Vector< InputType > &t, const Semiring &semiring=Semiring(), const Minus &minus=Minus(), const Divide &divide=Divide())
 Solves a linear system \( b = Ax \) with \( x \) unknown by using the bi-conjugate gradient (bi-CG) stabilised method; i.e., BiCGstab.
 
template<Descriptor descr = descriptors::no_operation, typename IOType , typename ResidualType , typename NonzeroType , typename InputType , class Ring = Semiring< grb::operators::add< IOType >, grb::operators::mul< IOType >, grb::identities::zero, grb::identities::one >, class Minus = operators::subtract< IOType >, class Divide = operators::divide< IOType >>
grb::RC conjugate_gradient (grb::Vector< IOType > &x, const grb::Matrix< NonzeroType > &A, const grb::Vector< InputType > &b, const size_t max_iterations, ResidualType tol, size_t &iterations, ResidualType &residual, grb::Vector< IOType > &r, grb::Vector< IOType > &u, grb::Vector< IOType > &temp, const Ring &ring=Ring(), const Minus &minus=Minus(), const Divide &divide=Divide())
 Solves a linear system \( b = Ax \) with \( x \) unknown by the Conjugate Gradients (CG) method on general fields.
 
template<Descriptor descr = descriptors::no_operation, typename OutputType , typename InputType1 , typename InputType2 , class Ring , class Division = grb::operators::divide< typename Ring::D3, typename Ring::D3, typename Ring::D4 >>
RC cosine_similarity (OutputType &similarity, const Vector< InputType1 > &x, const Vector< InputType2 > &y, const Ring &ring=Ring(), const Division &div=Division())
 Computes the cosine similarity.
 
template<Descriptor descr = descriptors::no_operation, typename IOType = double, class Operator = operators::square_diff< IOType, IOType, IOType >>
RC kmeans_iteration (Matrix< IOType > &K, Vector< std::pair< size_t, IOType > > &clusters_and_distances, const Matrix< IOType > &X, const size_t max_iter=1000, const Operator &dist_op=Operator())
 The kmeans iteration given an initialisation.
 
template<Descriptor descr, typename OutputType , typename InputType >
RC knn (Vector< OutputType > &u, const Matrix< InputType > &A, const size_t source, const size_t k, Vector< bool > &buf1)
 Given a graph and a source vertex, indicates which vertices are contained within k hops.
 
template<Descriptor descr = descriptors::no_operation, typename IOType = double, class Operator = operators::square_diff< IOType, IOType, IOType >>
RC kpp_initialisation (Matrix< IOType > &K, const Matrix< IOType > &X, const Operator &dist_op=Operator())
A simple implementation of the k-means++ initialisation algorithm for k-means.
 
template<typename IOType >
RC label (Vector< IOType > &out, const Vector< IOType > &y, const Matrix< IOType > &W, const size_t n, const size_t l, const size_t maxIterations=1000)
 The label propagation algorithm.
 
template<Descriptor descr, class Ring , typename IOType , typename InputType >
RC mpv (Vector< IOType > &u, const Matrix< InputType > &A, const size_t k, const Vector< IOType > &v, Vector< IOType > &temp, const Ring &ring)
 The matrix powers kernel.
 
template<Descriptor descr = descriptors::no_operation, class Ring , typename InputType , typename OutputType , Backend backend, typename Coords >
RC norm2 (OutputType &x, const Vector< InputType, backend, Coords > &y, const Ring &ring=Ring(), const typename std::enable_if< std::is_floating_point< OutputType >::value, void >::type *const =nullptr)
 Provides a generic implementation of the 2-norm computation.
 
template<Descriptor descr = descriptors::no_operation, typename IOType , typename NonzeroT >
RC simple_pagerank (Vector< IOType > &pr, const Matrix< NonzeroT > &L, Vector< IOType > &pr_next, Vector< IOType > &pr_nextnext, Vector< IOType > &row_sum, const IOType alpha=0.85, const IOType conv=0.0000001, const size_t max=1000, size_t *const iterations=nullptr, double *const quality=nullptr)
 The canonical PageRank algorithm.
 
template<Descriptor descr = descriptors::no_operation, typename IOType , typename WeightType , typename BiasType , typename ThresholdType = IOType, class MinMonoid = Monoid< grb::operators::min< IOType >, grb::identities::infinity >, class ReluMonoid = Monoid< grb::operators::relu< IOType >, grb::identities::negative_infinity >, class Ring = Semiring< grb::operators::add< IOType >, grb::operators::mul< IOType >, grb::identities::zero, grb::identities::one >>
grb::RC sparse_nn_single_inference (grb::Vector< IOType > &out, const grb::Vector< IOType > &in, const std::vector< grb::Matrix< WeightType > > &layers, const std::vector< BiasType > &biases, const ThresholdType threshold, grb::Vector< IOType > &temp, const ReluMonoid &relu=ReluMonoid(), const MinMonoid &min=MinMonoid(), const Ring &ring=Ring())
 Performs an inference step of a single data element through a Sparse Neural Network defined by num_layers sparse weight matrices and num_layers biases.
 
template<Descriptor descr = descriptors::no_operation, typename IOType , typename WeightType , typename BiasType , class ReluMonoid = Monoid< grb::operators::relu< IOType >, grb::identities::negative_infinity >, class Ring = Semiring< grb::operators::add< IOType >, grb::operators::mul< IOType >, grb::identities::zero, grb::identities::one >>
grb::RC sparse_nn_single_inference (grb::Vector< IOType > &out, const grb::Vector< IOType > &in, const std::vector< grb::Matrix< WeightType > > &layers, const std::vector< BiasType > &biases, grb::Vector< IOType > &temp, const ReluMonoid &relu=ReluMonoid(), const Ring &ring=Ring())
 Performs an inference step of a single data element through a Sparse Neural Network defined by num_layers sparse weight matrices and num_layers biases.
 
template<bool normalize = false, typename IOType >
RC spy (grb::Matrix< IOType > &out, const grb::Matrix< bool > &in)
 Specialisation for boolean input matrices in.
 
template<bool normalize = false, typename IOType , typename InputType >
RC spy (grb::Matrix< IOType > &out, const grb::Matrix< InputType > &in)
 Given an input matrix and a smaller output matrix, map nonzeroes from the input matrix into the smaller one and count the number of nonzeroes that are mapped from the bigger matrix into the smaller.
 
template<bool normalize = false, typename IOType >
RC spy (grb::Matrix< IOType > &out, const grb::Matrix< void > &in)
 Specialisation for void input matrices in.
 

Detailed Description

The namespace for ALP/GraphBLAS algorithms.

Function Documentation

◆ bicgstab()

RC bicgstab ( grb::Vector< IOType > &  x,
const grb::Matrix< NonzeroType > &  A,
const grb::Vector< InputType > &  b,
const size_t  max_iterations,
ResidualType  tol,
size_t &  iterations,
ResidualType &  residual,
Vector< InputType > &  r,
Vector< InputType > &  rhat,
Vector< InputType > &  p,
Vector< InputType > &  v,
Vector< InputType > &  s,
Vector< InputType > &  t,
const Semiring &  semiring = Semiring(),
const Minus &  minus = Minus(),
const Divide &  divide = Divide() 
)

Solves a linear system \( b = Ax \) with \( x \) unknown by using the bi-conjugate gradient (bi-CG) stabilised method; i.e., BiCGstab.

Template Parameters
descr - Any descriptor to use for the computation (optional).
IOType - The solution vector element type.
NonzeroType - The system matrix entry type.
InputType - The element type of the right-hand side vector.
ResidualType - The type of the residuals used during computation.
Semiring - The semiring under which to perform the BiCGstab.
Minus - The inverse operator of the additive operator of Semiring.
Divide - The inverse of the multiplicative operator of Semiring.

By default, these will be the regular add, mul, subtract, and divide over the types IOType, NonzeroType, InputType, and/or ResidualType, as appropriate.

Does not perform any preconditioning.

Parameters
[in,out] x - On input: an initial guess to the solution \( Ax = b \). On output: if grb::SUCCESS is returned, the solution to \( Ax=b \) within the given tolerance tol. Otherwise, the last computed approximation to the solution is returned.
[in] A - The square non-singular system matrix \( A \).
[in] b - The right-hand side vector \( b \).

If the size of \( A \) is \( n \times n \), then the sizes of x and b must be \( n \) also. The vector x must have capacity \( n \).

Mandatory inputs to the BiCGstab algorithm:

Parameters
[in] max_iterations - The maximum number of iterations this algorithm may perform.
[in] tol - The relative tolerance which determines when an approximated solution \( x \) becomes acceptable. Must be positive and non-zero.

Additional outputs of this algorithm:

Parameters
[out] iterations - When grb::SUCCESS is returned, the number of iterations that were required to obtain an acceptable approximate solution.
[out] residual - When grb::SUCCESS is returned, the square of the 2-norm of the residual; i.e., \( (r,r) \), where \( r = b - Ax \).

To operate, this algorithm requires a workspace consisting of six vectors of length and capacity \( n \). If vectors with less capacity are passed as arguments, grb::ILLEGAL will be returned.

Parameters
[in] r,rhat,p,v,s,t - Workspace vectors required for BiCGstab.

The BiCGstab algorithm operates over a field defined by the following algebraic structures:

Parameters
[in] semiring - Defines the domains as well as the additive and the multiplicative monoid.
[in] minus - The inverse of the additive operator.
[in] divide - The inverse of the multiplicative operator.
Note
When compiling with the _DEBUG macro defined, the print-out statements require sqrt as an additional algebraic concept. This concept presently lives "outside" of ALP.

Valid descriptors to this algorithm are:

  1. descriptors::no_casting
  2. descriptors::transpose
Returns
grb::SUCCESS If an acceptable solution is returned.
grb::FAILED If the algorithm failed to find an acceptable solution and returns an approximate one with the given residual.
grb::MISMATCH If two or more of the input arguments have incompatible sizes.
grb::MISMATCH If one or more of the workspace vectors has an incompatible size.
grb::ILLEGAL If tol is zero or negative.
grb::ILLEGAL If x has capacity less than \( n \).
grb::ILLEGAL If one or more of the workspace vectors has a capacity less than \( n \).
grb::PANIC If an unrecoverable error has been encountered. The output as well as the state of ALP/GraphBLAS is undefined.
Performance semantics
  1. This function does not allocate nor free dynamic memory, nor shall it make any system calls.

For performance semantics regarding work, inter-process data movement, intra-process data movement, synchronisations, and memory use, please see the specification of the ALP primitives this function relies on. These performance semantics, with the exception of getters such as grb::nnz, are specific to the backend selected during compilation.
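
For illustration, a minimal call might read as follows. This is a sketch only: it assumes double data under the default real semiring, and the header paths are assumptions that may differ per ALP release.

  #include <graphblas.hpp>
  #include <graphblas/algorithms/bicgstab.hpp> // header path is an assumption

  // Solves Ax = b for a populated n-by-n matrix A and right-hand side b;
  // x holds the initial guess on input and must have capacity n.
  grb::RC solve_bicgstab(
      grb::Vector< double > &x,
      const grb::Matrix< double > &A,
      const grb::Vector< double > &b
  ) {
      const size_t n = grb::size( x );
      // the six workspace vectors, each of length and capacity n
      grb::Vector< double > r( n ), rhat( n ), p( n ), v( n ), s( n ), t( n );
      size_t iterations = 0;
      double residual = 0.0;
      return grb::algorithms::bicgstab(
          x, A, b,
          1000,   // max_iterations
          1e-8,   // tol, must be positive and non-zero
          iterations, residual,
          r, rhat, p, v, s, t
      );
  }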

◆ conjugate_gradient()

grb::RC conjugate_gradient ( grb::Vector< IOType > &  x,
const grb::Matrix< NonzeroType > &  A,
const grb::Vector< InputType > &  b,
const size_t  max_iterations,
ResidualType  tol,
size_t &  iterations,
ResidualType &  residual,
grb::Vector< IOType > &  r,
grb::Vector< IOType > &  u,
grb::Vector< IOType > &  temp,
const Ring &  ring = Ring(),
const Minus &  minus = Minus(),
const Divide &  divide = Divide() 
)

Solves a linear system \( b = Ax \) with \( x \) unknown by the Conjugate Gradients (CG) method on general fields.

Does not perform any preconditioning.

Template Parameters
descr - The user descriptor.
IOType - The input/output vector nonzero type.
ResidualType - The type of the residual.
NonzeroType - The matrix nonzero type.
InputType - The right-hand side vector nonzero type.
Ring - The semiring under which to perform CG.
Minus - The minus operator corresponding to the inverse of the additive operator of the given Ring.
Divide - The division operator corresponding to the inverse of the multiplicative operator of the given Ring.

Valid descriptors to this algorithm are:

  1. descriptors::no_casting
  2. descriptors::transpose

By default, i.e., if none of ring, minus, or divide (nor their types) are explicitly provided by the user, the natural field on double data types will be assumed.

Note
An abstraction of a field that encapsulates Ring, Minus, and Divide may be more appropriate. This will also naturally ensure that demands on domain types are met.
Parameters
[in,out] x - On input: an initial guess to the solution. On output: the last computed approximation.
[in] A - The (square) positive semi-definite system matrix.
[in] b - The known right-hand side in \( Ax = b \). Must be structurally dense.

If A is \( n \times n \), then x and b must have matching length \( n \). The vector x furthermore must have a capacity of \( n \).

CG algorithm inputs:

Parameters
[in] max_iterations - The maximum number of CG iterations.
[in] tol - The requested relative tolerance.

Additional outputs (besides x):

Parameters
[out] iterations - The number of iterations the algorithm has started.
[out] residual - The residual corresponding to output x.

The CG algorithm requires three workspace buffers with capacity \( n \):

Parameters
[in,out] r - A temporary vector of the same size as x.
[in,out] u - A temporary vector of the same size as x.
[in,out] temp - A temporary vector of the same size as x.

Finally, the algebraic structures over which the CG is executed are given:

Parameters
[in] ring - The semiring under which to perform the CG.
[in] minus - The inverse of the additive operator of ring.
[in] divide - The inverse of the multiplicative operator of ring.

This algorithm may return one of the following error codes:

Returns
grb::SUCCESS When the algorithm has converged to a solution within the given max_iterations and tol.
grb::FAILED When the algorithm did not converge within the given max_iterations.
grb::ILLEGAL When A is not square.
grb::MISMATCH When x or b does not match the size of A.
grb::ILLEGAL When x does not have capacity \( n \).
grb::ILLEGAL When at least one of the workspace vectors does not have capacity \( n \).
grb::ILLEGAL If tol is not strictly positive.
grb::PANIC If an unrecoverable error has been encountered. The output as well as the state of ALP/GraphBLAS is undefined.

On output, the contents of the workspace r, u, and temp are always undefined. For non-grb::SUCCESS error codes, additional containers or states may be left undefined:

  1. when grb::PANIC is returned, the entire program state, including the contents of all containers, become undefined;
  2. when grb::ILLEGAL or grb::MISMATCH are returned and iterations equals zero, then all outputs are left unmodified compared to their contents at function entry;
  3. when grb::ILLEGAL or grb::MISMATCH are returned and iterations is nonzero, then the contents of x are undefined.
Performance semantics
  1. This function does not allocate nor free dynamic memory, nor shall it make any system calls.

For performance semantics regarding work, inter-process data movement, intra-process data movement, synchronisations, and memory use, please see the specification of the ALP primitives this function relies on. These performance semantics, with the exception of getters such as grb::nnz, are specific to the backend selected during compilation.
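
A minimal usage sketch follows, assuming double data under the default real semiring; the header paths are assumptions.

  #include <graphblas.hpp>
  #include <graphblas/algorithms/conjugate_gradient.hpp> // path is an assumption

  // Solves Ax = b for a square positive semi-definite n-by-n matrix A;
  // x holds the initial guess on input and must have capacity n.
  grb::RC solve_cg(
      grb::Vector< double > &x,
      const grb::Matrix< double > &A,
      const grb::Vector< double > &b
  ) {
      const size_t n = grb::size( x );
      grb::Vector< double > r( n ), u( n ), temp( n ); // workspace, capacity n
      size_t iterations = 0;
      double residual = 0.0;
      return grb::algorithms::conjugate_gradient(
          x, A, b,
          1000,   // max_iterations
          1e-8,   // tol, must be strictly positive
          iterations, residual,
          r, u, temp
      );
  }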

◆ cosine_similarity()

RC cosine_similarity ( OutputType &  similarity,
const Vector< InputType1 > &  x,
const Vector< InputType2 > &  y,
const Ring &  ring = Ring(),
const Division &  div = Division() 
)

Computes the cosine similarity.

Given two vectors \( x, y \) of equal size \( n \), this function computes \( \alpha = \frac{ (x,y) }{ ||x||_2\cdot||y||_2 } \).

The 2-norms and inner products are computed according to the given semiring. However, the norms make use of the standard sqrt, and so the algorithm assumes a regular field is used. Effectively, the semiring hence controls the precision and data types under which the computation is performed.

Template Parameters
descr - The descriptor under which to perform the computation.
OutputType - The type of the output element (scalar).
InputType1 - The type of the first vector.
InputType2 - The type of the second vector.
Ring - The semiring used.
Division - The binary operator corresponding to division under the given Ring.
Parameters
[out] similarity - Where to fold the result into.
[in] x - The non-zero left-hand input vector.
[in] y - The non-zero right-hand input vector.
[in] ring - The semiring to compute over.
[in] div - The division operator corresponding to ring.
Note
The vectors x and/or y may be sparse or dense.

The argument div is optional. It will map to grb::operators::divide by default.

Returns
SUCCESS If the computation was successful.
MISMATCH If the vector sizes do not match. The output similarity is undefined.
ILLEGAL If x and/or y is all zero. The output similarity is undefined.
PANIC If an unrecoverable error has been encountered. The output as well as the state of ALP/GraphBLAS is undefined.
Performance semantics
  1. This function does not allocate nor free dynamic memory, nor shall it make any system calls.

For performance semantics regarding work, inter-process data movement, intra-process data movement, synchronisations, and memory use, please see the specification of the ALP primitives this function relies on. These performance semantics, with the exception of getters such as grb::nnz, are specific to the backend selected during compilation.
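
Since Ring has no default for this function, a call must supply it explicitly. A sketch, assuming the usual real semiring over doubles and assumed header paths:

  #include <cmath>
  #include <graphblas.hpp>
  #include <graphblas/algorithms/cosine_similarity.hpp> // path is an assumption

  double similarity_of(
      const grb::Vector< double > &x,
      const grb::Vector< double > &y
  ) {
      // the standard real semiring (plus, times, 0, 1)
      typedef grb::Semiring<
          grb::operators::add< double >, grb::operators::mul< double >,
          grb::identities::zero, grb::identities::one
      > Reals;
      double similarity = 0.0;
      const grb::RC rc =
          grb::algorithms::cosine_similarity( similarity, x, y, Reals() );
      return rc == grb::SUCCESS ? similarity : std::nan( "" );
  }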

◆ kmeans_iteration()

RC kmeans_iteration ( Matrix< IOType > &  K,
Vector< std::pair< size_t, IOType > > &  clusters_and_distances,
const Matrix< IOType > &  X,
const size_t  max_iter = 1000,
const Operator &  dist_op = Operator() 
)

The kmeans iteration given an initialisation.

Parameters
[in,out] K - A k by m matrix containing the current k means as row vectors.
[in,out] clusters_and_distances - Vector containing the class and distance to centroid for each point.
[in] X - An m by n matrix containing the n points to be classified as column vectors.
[in] max_iter - The maximum number of iterations.
[in] dist_op - The coordinatewise distance operator; squared difference by default.

◆ knn()

RC knn ( Vector< OutputType > &  u,
const Matrix< InputType > &  A,
const size_t  source,
const size_t  k,
Vector< bool > &  buf1 
)

Given a graph and a source vertex, indicates which vertices are contained within k hops.

This implementation is based on the matrix powers kernel over a Boolean semiring.

Parameters
[out] u - The distance-k neighbourhood. Any prior contents will be ignored.
[in] A - The input graph in (square) matrix form.
[in] source - The source vertex index.
[in] k - The neighbourhood distance, or the maximum number of hops in a breadth-first search.

This algorithm requires the following workspace:

Parameters
[in,out] buf1 - A buffer vector. Must match the size of A.

For \( n \times n \) matrices A, the capacities of u and buf1 must equal \( n \).

Returns
grb::SUCCESS When the computation completes successfully.
grb::MISMATCH When the dimensions of u do not match that of A.
grb::MISMATCH If source is not in range of A.
grb::ILLEGAL If u or buf1 has insufficient capacity.
grb::PANIC If an unrecoverable error has been encountered. The output as well as the state of ALP/GraphBLAS is undefined.
Performance semantics
  1. This function does not allocate nor free dynamic memory, nor shall it make any system calls.

For performance semantics regarding work, inter-process data movement, intra-process data movement, synchronisations, and memory use, please see the specification of the ALP primitives this function relies on. These performance semantics, with the exception of getters such as grb::nnz, are specific to the backend selected during compilation.
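
Since descr has no default for this function, it must be given explicitly. A sketch, under the assumption of a Boolean adjacency matrix and an assumed header path:

  #include <graphblas.hpp>
  #include <graphblas/algorithms/knn.hpp> // path is an assumption

  // Computes the k-hop neighbourhood of the given source vertex.
  grb::RC neighbourhood(
      grb::Vector< bool > &u,
      const grb::Matrix< bool > &A,
      const size_t source, const size_t k
  ) {
      grb::Vector< bool > buf1( grb::size( u ) ); // workspace, same size as A
      return grb::algorithms::knn< grb::descriptors::no_operation >(
          u, A, source, k, buf1 );
  }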

◆ kpp_initialisation()

RC kpp_initialisation ( Matrix< IOType > &  K,
const Matrix< IOType > &  X,
const Operator &  dist_op = Operator() 
)

A simple implementation of the k-means++ initialisation algorithm for k-means.

Parameters
[in,out] K - A k by m matrix containing the current k means as row vectors.
[in] X - An m by n matrix containing the n points to be classified as column vectors.
[in] dist_op - The coordinatewise distance operator; squared difference by default.
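
A sketch that combines this initialisation with the above kmeans_iteration; the header name is an assumption, as is the use of double data.

  #include <utility>
  #include <graphblas.hpp>
  #include <graphblas/algorithms/kmeans.hpp> // header name is an assumption

  // Clusters the n column-wise points of X into the k row-wise means of K.
  grb::RC cluster(
      grb::Matrix< double > &K,                              // k by m
      grb::Vector< std::pair< size_t, double > > &clusters_and_distances,
      const grb::Matrix< double > &X                         // m by n
  ) {
      grb::RC rc = grb::algorithms::kpp_initialisation( K, X );
      if( rc == grb::SUCCESS ) {
          rc = grb::algorithms::kmeans_iteration(
              K, clusters_and_distances, X, 1000 );
      }
      return rc;
  }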

◆ label()

RC label ( Vector< IOType > &  out,
const Vector< IOType > &  y,
const Matrix< IOType > &  W,
const size_t  n,
const size_t  l,
const size_t  maxIterations = 1000 
)

The label propagation algorithm.

Template Parameters
IOType - The value type of the label vector. This will determine the precision of all computations this algorithm performs.
Parameters
[out] out - The resulting labelled vector representing the n vertices.
[in] y - Vector holding the initial labels from a total set of n vertices. The initial labels are assumed to correspond to the first l entries of this vector. The labels must be either 0 or 1.
[in] W - Sparse symmetric matrix of size n by n, holding the weights between the n vertices. The weights must be positive (larger than 0). The matrix may be defective, and the corresponding graph need not be connected.
[in] n - The total number of vertices. If zero, and all of out, y, and W are empty, calling this function is equivalent to a no-op.
[in] l - The number of vertices with an initial label. Must be larger than zero.
[in] maxIterations - The maximum number of iterations this algorithm may execute. Optional. Default value: 1000.
Note
If the underlying graph is not connected then some components may be rendered immutable by this algorithm.
Returns
grb::ILLEGAL If one of the arguments passed to this function is illegal. The output will be left unmodified.
grb::ILLEGAL If the vector out did not have full capacity available. The output will be left unmodified.
grb::ILLEGAL If n was nonzero but l was zero.
grb::SUCCESS If the computation converged within max iterations, or if n was zero.
grb::FAILED If the method did not converge within max iterations. The output will contain the latest iterand.
grb::PANIC If an unrecoverable error has been encountered. The output as well as the state of ALP/GraphBLAS is undefined.

[1] Kamvar, Haveliwala, Manning, Golub; ‘Extrapolation methods for accelerating the PageRank computation’, ACM Press, 2003.
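
A minimal sketch, assuming double-valued labels and an assumed header path:

  #include <graphblas.hpp>
  #include <graphblas/algorithms/label.hpp> // path is an assumption

  // Propagates the l seed labels in y over all n vertices of W.
  grb::RC propagate(
      grb::Vector< double > &out,
      const grb::Vector< double > &y,  // first l entries hold the 0/1 seed labels
      const grb::Matrix< double > &W,  // symmetric n-by-n, positive weights
      const size_t n, const size_t l
  ) {
      return grb::algorithms::label( out, y, W, n, l, 1000 );
  }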

◆ mpv()

RC mpv ( Vector< IOType > &  u,
const Matrix< InputType > &  A,
const size_t  k,
const Vector< IOType > &  v,
Vector< IOType > &  temp,
const Ring &  ring 
)

The matrix powers kernel.

Calculates \( u = A^k v \) for some integer \( k \geq 0 \) using the given semiring.

Template Parameters
descr - The descriptor used to perform this operation.
Ring - The semiring used.
IOType - The output vector type.
InputType - The nonzero type of matrix elements.
implementation - Which implementation to use.
Parameters
[out] u - The output vector. Contents shall be overwritten. The supplied vector must match the row dimension size of A.
[in] A - The square input matrix A. The supplied matrix must match the dimensions of u and v.
[in] k - How many matrix–vector multiplications are requested.
[in] v - The input vector v. The supplied vector must match the column dimension size of A. It may not be the same vector as u.
[in] ring - The semiring to be used. This defines the additive and multiplicative monoids to be used.

This algorithm requires workspace:

Parameters
[in,out] temp - A workspace buffer of matching size to the row dimension of A. May be the same vector as v, though note that the contents of temp on output are undefined.

This algorithm assumes that u and temp have full capacity. If this assumption does not hold, then a two-stage mpv must be employed instead.

Returns
grb::SUCCESS If the computation completed successfully.
grb::ILLEGAL If A is not square.
grb::MISMATCH If one or more of u, v, or temp has an incompatible size with A.
grb::ILLEGAL If one or more of u or temp does not have a full capacity.
grb::PANIC If an unrecoverable error has been encountered. The output as well as the state of ALP/GraphBLAS is undefined.
grb::OVERLAP If one or more of v or temp is the same vector as u.
Performance semantics
  1. This function does not allocate nor free dynamic memory, nor shall it make any system calls.

For performance semantics regarding work, inter-process data movement, intra-process data movement, synchronisations, and memory use, please see the specification of the ALP primitives this function relies on. These performance semantics, with the exception of getters such as grb::nnz, are specific to the backend selected during compilation.
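
A sketch computing \( u = A^k v \) under the real semiring; descr must be supplied explicitly, and the header path is an assumption.

  #include <graphblas.hpp>
  #include <graphblas/algorithms/mpv.hpp> // path is an assumption

  grb::RC powers(
      grb::Vector< double > &u,
      const grb::Matrix< double > &A,
      const size_t k,
      const grb::Vector< double > &v
  ) {
      typedef grb::Semiring<
          grb::operators::add< double >, grb::operators::mul< double >,
          grb::identities::zero, grb::identities::one
      > Reals;
      // workspace; its contents on output are undefined
      grb::Vector< double > temp( grb::size( u ) );
      return grb::algorithms::mpv< grb::descriptors::no_operation >(
          u, A, k, v, temp, Reals() );
  }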

◆ norm2()

RC norm2 ( OutputType &  x,
const Vector< InputType, backend, Coords > &  y,
const Ring &  ring = Ring(),
const typename std::enable_if< std::is_floating_point< OutputType >::value, void >::type * const  = nullptr 
)

Provides a generic implementation of the 2-norm computation.

Proceeds by computing the dot-product of the input vector with itself and then taking the square root of the result.

This function is only available when the output type is floating point.

For return codes, exception behaviour, performance semantics, and template and non-template arguments:

See also
grb::dot.
Parameters
[out] x - The 2-norm of y. The input value of x will be ignored.
[in] y - The vector to compute the norm of.
[in] ring - The Semiring under which the 2-norm is to be computed.
Warning
This function computes x out-of-place. This is contrary to standard ALP/GraphBLAS functions that are always in-place.
A ring is not sufficient for computing a two-norm. This implementation assumes the standard sqrt function must be applied on the result of a dot-product of y with itself under the supplied semiring.
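
A sketch under the real semiring; the header name is an assumption.

  #include <graphblas.hpp>
  #include <graphblas/algorithms/norm.hpp> // header name is an assumption

  double two_norm( const grb::Vector< double > &y ) {
      typedef grb::Semiring<
          grb::operators::add< double >, grb::operators::mul< double >,
          grb::identities::zero, grb::identities::one
      > Reals;
      double x = 0.0; // written out-of-place; the input value is ignored
      (void) grb::algorithms::norm2( x, y, Reals() );
      return x;
  }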

◆ simple_pagerank()

RC simple_pagerank ( Vector< IOType > &  pr,
const Matrix< NonzeroT > &  L,
Vector< IOType > &  pr_next,
Vector< IOType > &  pr_nextnext,
Vector< IOType > &  row_sum,
const IOType  alpha = 0.85,
const IOType  conv = 0.0000001,
const size_t  max = 1000,
size_t *const  iterations = nullptr,
double *const  quality = nullptr 
)

The canonical PageRank algorithm.

Template Parameters
descr - The descriptor under which to perform the computation.
IOType - The value type of the pagerank vector. This will determine the precision of all computations this algorithm performs.
NonzeroT - The type of the elements of the nonzero matrix.
Parameters
[in,out] pr - Vector of size and capacity \( n \), where \( n \) is the vertex size of the input graph L. On input, the contents of this vector will be taken as the initial guess to the final result, but only if the vector is dense; in that case, all entries of the initial guess must be nonzero. If the vector is not dense, this algorithm will make an initial guess of its own. On output, if grb::SUCCESS is returned, this vector holds the PageRank vector corresponding to L.
[in] L - The input graph as a square link matrix of size \( n \).

To operate, this algorithm requires a workspace of three vectors. The size and capacities of these must equal \( n \). The contents on input are ignored, and the contents on output are undefined.

This algorithm does not explicitly materialise the Google matrix \( G = \alpha L + (1-\alpha)ee^T \) over which the power iterations are executed.

Parameters
[in,out] pr_next - Buffer for the PageRank algorithm.
[in,out] pr_nextnext - Buffer for the PageRank algorithm.
[in,out] row_sum - Buffer for the PageRank algorithm.

The PageRank algorithm accepts the following optional parameters:

Parameters
[in] alpha - The scaling factor. The default value is 0.85. This value must be smaller than 1 and larger than 0.
[in] conv - If the difference between two successive iterations, in terms of its one-norm, is less than this value, then the PageRank vector is considered converged and this algorithm exits successfully. The default value is \( 10^{-7} \). If this value is set to zero, then the algorithm will continue until max iterations are reached. May not be negative.
[in] max - The maximum number of power iterations. The default value is 1000. This value must be larger than 0.

The PageRank algorithm reports the following optional output:

Parameters
[out] iterations - If not nullptr, the number of iterations the call to this algorithm took will be written to the location pointed to.
[out] quality - If not nullptr, the last computed residual will be written to the location pointed to.
Returns
grb::SUCCESS If the computation converged within max iterations.
grb::ILLEGAL If L is not square. All outputs are left untouched.
grb::MISMATCH If the dimensions of pr and L do not match. All outputs are left untouched.
grb::ILLEGAL If an invalid value for alpha, conv, or max was given. All outputs are left untouched.
grb::ILLEGAL If the capacity of one or more of pr, pr_next, pr_nextnext, or row_sum is less than \( n \). All outputs are left untouched.
grb::FAILED If the PageRank method did not converge to within the given tolerance conv within the given maximum number of iterations max. The output pr is the last computed approximation, while iterations and quality are likewise set to max and the last computed residual, respectively.
grb::PANIC If an unrecoverable error has been encountered. The output as well as the state of ALP/GraphBLAS is undefined.
Performance semantics
  1. This function does not allocate nor free dynamic memory, nor shall it make any system calls.

For performance semantics regarding work, inter-process data movement, intra-process data movement, synchronisations, and memory use, please see the specification of the ALP primitives this function relies on. These performance semantics, with the exception of getters such as grb::nnz, are specific to the backend selected during compilation.
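
A minimal sketch with the documented defaults spelled out; the header path is an assumption.

  #include <graphblas.hpp>
  #include <graphblas/algorithms/simple_pagerank.hpp> // path is an assumption

  grb::RC pagerank_of(
      grb::Vector< double > &pr,
      const grb::Matrix< double > &L
  ) {
      const size_t n = grb::nrows( L );
      // workspace: three vectors of size and capacity n
      grb::Vector< double > pr_next( n ), pr_nextnext( n ), row_sum( n );
      size_t iterations = 0;
      double quality = 0.0;
      return grb::algorithms::simple_pagerank(
          pr, L, pr_next, pr_nextnext, row_sum,
          0.85,   // alpha
          1e-7,   // conv
          1000,   // max
          &iterations, &quality );
  }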

◆ sparse_nn_single_inference() [1/2]

grb::RC sparse_nn_single_inference ( grb::Vector< IOType > &  out,
const grb::Vector< IOType > &  in,
const std::vector< grb::Matrix< WeightType > > &  layers,
const std::vector< BiasType > &  biases,
const ThresholdType  threshold,
grb::Vector< IOType > &  temp,
const ReluMonoid &  relu = ReluMonoid(),
const MinMonoid &  min = MinMonoid(),
const Ring &  ring = Ring() 
)

Performs an inference step of a single data element through a Sparse Neural Network defined by num_layers sparse weight matrices and num_layers biases.

The initial single data element may be sparse also, as is common in Graph Neural Networks (GNNs).

Inference here is a repeated sequence of application of a sparse linear layer, addition of a bias factor, and the application of a ReLU.

We employ a linear algebraic formulation where the ReLU and the bias application are jointly applied via a max-operator.

This formalism follows closely the linear algebraic approach to the related IEEE/MIT GraphChallenge problem, such as, for example, described in

F. Pawlowski, R. H. Bisseling, B. Uçar, and A. N. Yzelman, "Combinatorial Tiling for Sparse Neural Networks", 2020 IEEE High Performance Extreme Computing (HPEC) Conference.

Parameters
[out] out - The result of inference through the neural network.
[in] in - The input vector; may be sparse or dense.
[in] layers - A collection of linear layers. Each layer is assumed to be square and of equal size to one another.

This implies that all layers are \( n \times n \). The vectors in and out hence must be of length \( n \).

Commonly, as an input propagates through a network, the features become increasingly dense. Hence out is assumed to have full capacity in order to potentially store a fully dense activation vector.

Inference proceeds under a set of biases, one for each layer. A constant bias value is added to the activation vectors prior to applying the given relu function. After application, the resulting vector is furthermore thresholded. The threshold is assumed constant over all layers.

Parameters
[in] biases - An array of num_layers bias factors.
[in] threshold - The value used for thresholding.

Inference is done using a single buffer that is alternated with out:

Parameters
[in,out] temp - A buffer of size and capacity \( n \).

Finally, optional arguments define the algebraic structures under which inference proceeds:

Parameters
[in] relu - The non-linear ReLU function to apply element-wise.
[in] min - Operator used for thresholding. The maximum feature value is hard-coded to 32, as per the GraphChallenge.
[in] ring - The semiring under which to perform the inference.

The default algebraic structures are standard relu (i.e., max), min for thresholding, and the real (semi-)ring.

Valid descriptors for this algorithm are:

  1. descriptors::no_casting
Note
This algorithm applies the propagation through layers in-place. To facilitate this, only square layers are allowed. Non-square layers would require the use of different vectors at every layer.
Thresholding here means that feature maps as propagated through the neural network are capped at some maximum value, threshold.
Returns
grb::SUCCESS If the inference was successful.
grb::ILLEGAL If the size of layers does not match that of biases.
grb::MISMATCH If at least one pair of dimensions between layers, in, out, and temp does not match.
grb::ILLEGAL If at least one layer was not square.
grb::ILLEGAL If the capacities of one or more of out and temp were not full.
Performance semantics
  1. This function does not allocate nor free dynamic memory, nor shall it make any system calls.

For performance semantics regarding work, inter-process data movement, intra-process data movement, synchronisations, and memory use, please see the specification of the ALP primitives this function relies on. These performance semantics, with the exception of getters such as grb::nnz, are specific to the backend selected during compilation.
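
A sketch of this thresholded variant, assuming double weights and biases and an assumed header path:

  #include <vector>
  #include <graphblas.hpp>
  #include <graphblas/algorithms/sparse_nn_single_inference.hpp> // assumption

  grb::RC infer(
      grb::Vector< double > &out,
      const grb::Vector< double > &in,
      const std::vector< grb::Matrix< double > > &layers, // square, equal sizes
      const std::vector< double > &biases                 // one per layer
  ) {
      // buffer of size and capacity n, alternated with out during inference
      grb::Vector< double > temp( grb::size( out ) );
      return grb::algorithms::sparse_nn_single_inference(
          out, in, layers, biases,
          32.0,  // threshold, as per the GraphChallenge
          temp );
  }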

◆ sparse_nn_single_inference() [2/2]

grb::RC sparse_nn_single_inference ( grb::Vector< IOType > &  out,
const grb::Vector< IOType > &  in,
const std::vector< grb::Matrix< WeightType > > &  layers,
const std::vector< BiasType > &  biases,
grb::Vector< IOType > &  temp,
const ReluMonoid &  relu = ReluMonoid(),
const Ring &  ring = Ring() 
)

Performs an inference step of a single data element through a Sparse Neural Network defined by num_layers sparse weight matrices and num_layers biases.

The initial single data element may be sparse also, as is common in Graph Neural Networks (GNNs).

Inference here is a repeated sequence of application of a sparse linear layer, addition of a bias factor, and the application of a ReLU.

We employ a linear algebraic formulation where the ReLU and the bias application are jointly applied via a max-operator.

This formalism follows closely the linear algebraic approach to the related IEEE/MIT GraphChallenge problem, such as, for example, described in

F. Pawlowski, R. H. Bisseling, B. Uçar, and A. N. Yzelman, "Combinatorial Tiling for Sparse Neural Networks", 2020 IEEE High Performance Extreme Computing (HPEC) Conference.

Parameters
[out] out - The result of inference through the neural network.
[in] in - The input vector; may be sparse or dense.
[in] layers - A collection of linear layers. Each layer is assumed to be square and of equal size to one another.

This implies that all layers are \( n \times n \). The vectors in and out hence must be of length \( n \).

Commonly, as an input propagates through a network, the features become increasingly dense. Hence out is assumed to have full capacity in order to potentially store a fully dense activation vector.

Inference proceeds under a set of biases, one for each layer. A constant bias value is added to the activation vectors prior to applying the given relu function. This function does not perform thresholding.

Parameters
[in] biases - An array of num_layers bias factors.

Inference is done using a single buffer that is alternated with out:

Parameters
[in,out] temp - A buffer of size and capacity \( n \).

Finally, optional arguments define the algebraic structures under which inference proceeds:

Parameters
[in] relu - The non-linear ReLU function to apply element-wise.
[in] ring - The semiring under which to perform the inference.

The default algebraic structures are standard relu (i.e., max) and the real (semi-)ring.

Valid descriptors for this algorithm are:

  1. descriptors::no_casting
Note
This algorithm applies the propagation through layers in-place. To facilitate this, only square layers are allowed. Non-square layers would require the use of different vectors at every layer.
Returns
grb::SUCCESS If the inference was successful.
grb::ILLEGAL If the size of layers does not match that of biases.
grb::MISMATCH If at least one pair of dimensions between layers, in, out, and temp does not match.
grb::ILLEGAL If at least one layer was not square.
grb::ILLEGAL If the capacities of one or more of out and temp were not full.
Performance semantics
  1. This function does not allocate nor free dynamic memory, nor shall it make any system calls.

For performance semantics regarding work, inter-process data movement, intra-process data movement, synchronisations, and memory use, please see the specification of the ALP primitives this function relies on. These performance semantics, with the exception of getters such as grb::nnz, are specific to the backend selected during compilation.

◆ spy() [1/3]

RC spy ( grb::Matrix< IOType > &  out,
const grb::Matrix< bool > &  in 
)

Specialisation for boolean input matrices in.

See grb::algorithms::spy.

◆ spy() [2/3]

RC spy ( grb::Matrix< IOType > &  out,
const grb::Matrix< InputType > &  in 
)

Given an input matrix and a smaller output matrix, map nonzeroes from the input matrix into the smaller one and count the number of nonzeroes that are mapped from the bigger matrix into the smaller.

Template Parameters
normalize - If set to true, will not compute the number of mapped nonzeroes, but its inverse instead (one divided by the count). The default value for this template parameter is false.
Parameters
[out] out - The smaller output matrix.
[in] in - The larger input matrix.
Returns
SUCCESS If the computation completes successfully.
ILLEGAL If out has a number of rows or columns larger than that of in.
Warning
Explicit zeroes (which, when cast from InputType to bool, read false) will be counted as nonzeroes by this algorithm.
Note
To not count explicit zeroes, pre-process the input matrix in, for example, as follows: grb::set( tmp, in, true );, with tmp a Boolean or pattern matrix of the same size as in.
Performance semantics
Warning
This algorithm does NOT request workspace buffers since, due to the use of level-3 primitives, it will have to allocate anyway. As such, this algorithm does not have clear performance semantics and should be used with care.

For performance semantics regarding work, inter-process data movement, intra-process data movement, synchronisations, and memory use, please see the specification of the ALP primitives this function relies on. These performance semantics, with the exception of getters such as grb::nnz, are specific to the backend selected during compilation.
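
A sketch that produces a normalised spy plot; the header path is an assumption.

  #include <graphblas.hpp>
  #include <graphblas/algorithms/spy.hpp> // path is an assumption

  // Maps the nonzero pattern of a large matrix onto a smaller one.
  grb::RC make_spy(
      grb::Matrix< double > &out,      // the smaller output matrix
      const grb::Matrix< double > &in  // the larger input matrix
  ) {
      // with normalize = true, entries hold one divided by the mapped count
      return grb::algorithms::spy< true >( out, in );
  }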

◆ spy() [3/3]

RC spy ( grb::Matrix< IOType > &  out,
const grb::Matrix< void > &  in 
)

Specialisation for void input matrices in.

See grb::algorithms::spy.