Lapack Library
- As a result, though the library builds happily and many functions can be used, other functions crash with unresolved symbols from LAPACK. Since it seems possible, at least in C, to pull a static library into a shared library, I tried that by putting TARGET_LINK_LIBRARIES(NameOfOurSharedLib mkl_blas95_ilp64 mkl_lapack95_ilp64) in CMakeLists.txt.
- LAPACK is a library of Fortran 77 routines for solving the most common problems in numerical linear algebra. It has been designed to be efficient on a wide range of modern high-performance computers and can be downloaded for free from the Netlib archives.
The C version of LAPACK, CLAPACK, which is produced automatically from the Fortran sources by the f2c translator, is also available, allowing C code to be linked like this: cc -o ccode ccode.c -L/usr/local/lib -lclapack -lcblas -lctmg -lf2c. The -lctmg library may not be needed. Although the code is in C, the internals behave like Fortran: every argument is passed by reference, and matrices are stored in column-major order.
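As a minimal sketch of what calling CLAPACK looks like, here is a program that solves a 2x2 linear system with dgesv_ (the trailing underscore follows the usual f2c naming; the prototype below assumes f2c's integer type maps to a C long, which may differ on your platform):

    #include <stdio.h>

    /* CLAPACK/f2c-style prototype for dgesv_, which solves A*x = b.
       Every argument is passed by reference, Fortran-style. */
    extern int dgesv_(long *n, long *nrhs, double *a, long *lda,
                      long *ipiv, double *b, long *ldb, long *info);

    int main(void)
    {
        /* A = [2 1; 1 3] stored column-major; right-hand side b = (3, 5) */
        double a[4] = { 2.0, 1.0,    /* column 1 */
                        1.0, 3.0 };  /* column 2 */
        double b[2] = { 3.0, 5.0 };
        long n = 2, nrhs = 1, ipiv[2], info;

        dgesv_(&n, &nrhs, a, &n, ipiv, b, &n, &info);
        if (info == 0)
            printf("x = (%g, %g)\n", b[0], b[1]);  /* expect (0.8, 1.4) */
        return 0;
    }

Compile and link it with the cc command shown above.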
BLAS
The first thing you must have on your system is a BLAS implementation. 'BLAS' stands for 'Basic Linear Algebra Subroutines,' and is a standard interface for operations like matrix multiplication. It is designed as a building-block for other linear-algebra applications, and is used both directly by our code and in LAPACK (see below). By using it, we can take advantage of many highly-optimized implementations of these operations that have been written to the BLAS interface. (Note that you will need implementations of BLAS levels 1-3.)
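To make the interface concrete, the following small C program multiplies two 2x2 matrices with the Level-3 routine dgemm_ (a sketch, assuming the common Fortran-to-C conventions of a trailing underscore and a 32-bit INTEGER; both can vary by compiler and BLAS build):

    #include <stdio.h>

    /* Fortran BLAS dgemm: C := alpha*op(A)*op(B) + beta*C.
       Assumed C prototype for the Fortran routine. */
    extern void dgemm_(const char *transa, const char *transb,
                       const int *m, const int *n, const int *k,
                       const double *alpha, const double *a, const int *lda,
                       const double *b, const int *ldb,
                       const double *beta, double *c, const int *ldc);

    int main(void)
    {
        /* 2x2 matrices in column-major (Fortran) order */
        double a[4] = { 1.0, 3.0, 2.0, 4.0 };  /* A = [1 2; 3 4] */
        double b[4] = { 5.0, 7.0, 6.0, 8.0 };  /* B = [5 6; 7 8] */
        double c[4] = { 0.0, 0.0, 0.0, 0.0 };
        int n = 2;
        double one = 1.0, zero = 0.0;

        /* "N", "N": use A and B untransposed */
        dgemm_("N", "N", &n, &n, &n, &one, a, &n, b, &n, &zero, c, &n);
        printf("C = [%g %g; %g %g]\n", c[0], c[2], c[1], c[3]);  /* [19 22; 43 50] */
        return 0;
    }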
You can find more BLAS information, as well as a basic implementation, on the BLAS Homepage. Once you get things working with the basic BLAS implementation, it might be a good idea to try to find a more optimized BLAS code for your hardware. Vendor-optimized BLAS implementations are available as part of the Intel MKL, HP CXML, IBM ESSL, SGI sgimath, and other libraries. An excellent, high-performance, free-software BLAS implementation is OpenBLAS; another is ATLAS.
Note that the generic BLAS does not come with a Makefile; compile it with something like the following (assuming the reference blas.tgz tarball from Netlib, whose Fortran sources unpack into the current directory):
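    gunzip blas.tgz
    tar xf blas.tar
    f77 -c -O3 *.f                        # compile each .f file to a .o file
    ar rv libblas.a *.o                   # collect the .o files into a library
    su -c "cp libblas.a /usr/local/lib"   # switch to root and install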
(Replace -O3 with your favorite optimization options. On Linux, I use g77 -O3 -fomit-frame-pointer -funroll-loops, with -malign-double -mcpu=i686 on a Pentium II.) Note that MPB looks for the standard BLAS library with -lblas, so the library file should be called libblas.a and reside in a standard directory like /usr/local/lib. (See also below for the --with-blas=lib option to MPB's configure script, to manually specify a library location.)
LAPACK
LAPACK, the Linear Algebra PACKage, is a standard collection of routines, built on BLAS, for more-complicated (dense) linear algebra operations like matrix inversion and diagonalization. You can download LAPACK from the LAPACK Home Page.
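As an illustration of the kind of operation LAPACK provides, the sketch below diagonalizes a small symmetric matrix with dsyev_ (same assumptions as the earlier examples: trailing-underscore naming and a C int for Fortran's INTEGER):

    #include <stdio.h>

    /* LAPACK dsyev: eigenvalues (and optionally eigenvectors) of a real
       symmetric matrix. Assumed C prototype for the Fortran routine. */
    extern void dsyev_(const char *jobz, const char *uplo, const int *n,
                       double *a, const int *lda, double *w,
                       double *work, const int *lwork, int *info);

    int main(void)
    {
        double a[4] = { 2.0, 1.0, 1.0, 2.0 };  /* A = [2 1; 1 2], column-major */
        double w[2], work[16];
        int n = 2, lwork = 16, info;

        /* "V": also compute eigenvectors; "U": read the upper triangle of A */
        dsyev_("V", "U", &n, a, &n, w, work, &lwork, &info);
        if (info == 0)
            printf("eigenvalues: %g %g\n", w[0], w[1]);  /* expect 1 and 3 */
        return 0;
    }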
Note that our software looks for LAPACK by linking with -llapack. This means that the library must be called liblapack.a and be installed in a standard directory like /usr/local/lib (alternatively, you can specify another directory via the LDFLAGS environment variable as described earlier). (See also below for the --with-lapack=lib option to our configure script, to manually specify a library location.)
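For example, assuming you installed OpenBLAS under a nonstandard prefix (the path below is purely illustrative), you might run something like:

    ./configure --with-blas=openblas --with-lapack=openblas LDFLAGS="-L$HOME/openblas/lib"

Here the option values are library names, so this links with -lopenblas for both BLAS and LAPACK, searching the directory given in LDFLAGS.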
I currently recommend installing OpenBLAS, which includes LAPACK, so you do not need to install it separately.
Razvan Carbunescu
EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2014-224
December 18, 2014
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-224.pdf
Abstract
Many applications call linear algebra libraries as a means of achieving better performance and reliability. LAPACK (Linear Algebra Package) is a standard software library for numerical linear algebra that is widely used in the industrial and scientific community. LAPACK functions require the user to know the sparsity and other mathematical structure of their inputs to be able to take advantage of the fastest codes: General Matrix (GE), General Band (GB), Positive Definite (PO), etc. If users are unsure of their matrix structure or cannot easily express it in the formats available (profile matrices, arrow matrices, etc.), they are forced to use a more general structure that includes their input, and so run less efficiently than possible. The goal of this thesis is to allow for automatic sparsity detection (ASD) within LAPACK that is completely hidden from the user and provides no slowdown for users running fully dense matrices. This work adds modular support for the detection of blocked sparsity within the LAPACK LU and Cholesky functions. It also creates the infrastructure and the algorithms to potentially expand sparsity detection to other factorizations, more input matrix structures, or provide further timing and memory improvements via integration directly in the solver routines. Two general approaches are implemented, named 'Profile' (ASD1) and 'Sparse block' (ASD2), with a third, more complicated method named 'Full sparsity' (ASD3) described more abstractly, only at an algorithm level. With these algorithms we obtain benefits of up to an order of magnitude (35.10x faster than the same LAPACK function) for matrices displaying 'blocked sparsity' patterns, and large benefits over the best LAPACK algorithms for patterns that don't fit into LAPACK categories (4.85x faster than the best LAPACK function). For matrices exhibiting no sparsity, these implementations incur either a negligible penalty (an overhead of 1%) or a small overhead (10-30%) that quickly decreases with the matrix size n or bandwidth b (less than 5% for n, b > 500).
Advisor: James Demmel