
Searched refs:MKL (Results 1 – 25 of 37) sorted by relevance


/external/tensorflow/third_party/mkl/
Dbuild_defs.bzl1 """Skylark macros for MKL.
3 if_mkl is a conditional to check if we are building with MKL.
4 if_mkl_ml is a conditional to check if we are building with MKL-ML.
5 if_mkl_ml_only is a conditional to check for MKL-ML-only (no MKL-DNN) mode.
6 if_mkl_lnx_x64 is a conditional to check for MKL
7 if_enable_mkl is a conditional to check if building with MKL and MKL is enabled.
9 mkl_repository is a repository rule for creating an MKL repository rule that can
18 """Shorthand for select()'ing on whether we're building with MKL.
21 if_true: expression to evaluate if building with MKL.
22 if_false: expression to evaluate if building without MKL.
[all …]
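The `if_mkl` docstring hits above describe a thin wrapper over Bazel's `select()`. A minimal sketch of that shape, runnable as plain Python standing in for Starlark: `select()` is stubbed here (in a real `build_defs.bzl` it is a Bazel builtin), and the `config_setting` label is an assumption, not necessarily the one TensorFlow uses.

```python
def select(branches):
    # Stub of Bazel's select(): Bazel resolves the matching branch at
    # analysis time; here we just return the mapping so the shorthand's
    # shape can be inspected.
    return branches

def if_mkl(if_true, if_false = []):
    """Shorthand for select()'ing on whether we're building with MKL.

    Args:
      if_true: expression to evaluate if building with MKL.
      if_false: expression to evaluate if building without MKL.
    """
    return select({
        "//third_party/mkl:build_with_mkl": if_true,  # assumed label
        "//conditions:default": if_false,
    })
```

In a BUILD file this would appear as, e.g., `copts = if_mkl(["-DINTEL_MKL"])`, expanding to MKL-specific flags only in MKL builds.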
/external/tensorflow/third_party/mkl_dnn/
Dbuild_defs.bzl2 """Returns `if_true` if MKL-DNN v0.x is used.
5 MKL-DNN v0.x open source library only, without depending on MKL binary form.
8 with MKL-DNN v0.x open source library only. Otherwise, the select statement
18 """Returns `if_true` if MKL-DNN v1.x is used.
21 MKL-DNN v1.x open source library only with user specified threadpool, without depending on MKL bin…
24 with MKL-DNN v1.x open source library only with user specified threadpool. Otherwise, the
Dmkldnn.BUILD26 # Create the file mkldnn_version.h with MKL-DNN version numbers.
27 # Currently, the version numbers are hard coded here. If MKL-DNN is upgraded then
33 # TODO(bhavanis): MKL-DNN minor version needs to be updated for MKL-DNN v1.x.
34 # The current version numbers will work only if MKL-DNN v0.21 is used.
/external/eigen/doc/
DUsingIntelMKL.dox29 * Content : Documentation on the use of Intel MKL through Eigen
35 /** \page TopicUsingIntelMKL Using Intel® MKL from %Eigen
37 <!-- \section TopicUsingIntelMKL_Intro Eigen and Intel® Math Kernel Library (Intel® MKL) -->
39 …enefit from built-in Intel® Math Kernel Library (MKL) optimizations with an installed copy of Inte…
41 <a href="http://eigen.tuxfamily.org/Counter/redirect_to_mkl.php"> Intel MKL </a> provides highly op…
42 Intel MKL is available on Linux, Mac and Windows for both Intel64 and IA32 architectures.
45 Intel® MKL is proprietary software and it is the responsibility of users to buy or register for c…
47 Using Intel MKL through %Eigen is easy:
49 …# link your program to MKL libraries (see the <a href="http://software.intel.com/en-us/articles/in…
52 When doing so, a number of %Eigen's algorithms are silently substituted with calls to Intel MKL rou…
[all …]
DOverview.dox22 … such as preprocessor directives, controlling assertions, multi-threading, MKL support, some Eigen…
DUsingBlasLapackBackends.dox39 …se <a href="http://eigen.tuxfamily.org/Counter/redirect_to_mkl.php">Intel® MKL</a>, Apple's Accele…
41 …telMKL page \endlink for further discussions on the specific use of Intel® MKL (also includes VML,…
DSparseLinearSystems.dox80 …<td>Requires the <a href="http://eigen.tuxfamily.org/Counter/redirect_to_mkl.php">Intel MKL</a> pa…
81 …<td>optimized for tough problems patterns, see also \link TopicUsingIntelMKL using MKL with Eigen …
/external/eigen/Eigen/
DPardisoSupport20 * This module brings support for the Intel(R) MKL PARDISO direct sparse solvers.
26 …der to use this module, the MKL headers must be accessible from the include paths, and your binary…
27 * See this \ref TopicUsingIntelMKL "page" for more information on MKL-Eigen integration.
DCore82 // this include file manages BLAS and MKL related macros
/external/tensorflow/tensorflow/core/grappler/optimizers/
Dauto_mixed_precision.h25 enum class AutoMixedPrecisionMode { CUDA, MKL }; enumerator
Dmeta_optimizer.cc202 new AutoMixedPrecision(AutoMixedPrecisionMode::MKL)); in MakeNewOptimizer()
269 MakeUnique<AutoMixedPrecision>(AutoMixedPrecisionMode::MKL)); in InitializeOptimizers()
Dauto_mixed_precision.cc954 case AutoMixedPrecisionMode::MKL: in get_mixed_precision_lists()
1258 if (force_all_fp16_ && mode_ == AutoMixedPrecisionMode::MKL) { in Optimize()
1287 case AutoMixedPrecisionMode::MKL: in Optimize()
1990 if (mode_ == AutoMixedPrecisionMode::MKL) { in Optimize()
Dauto_mixed_precision_test.cc1209 AutoMixedPrecision optimizer{AutoMixedPrecisionMode::MKL}; in TEST_F()
1252 AutoMixedPrecision optimizer{AutoMixedPrecisionMode::MKL}; in TEST_F()
1323 AutoMixedPrecision optimizer{AutoMixedPrecisionMode::MKL}; in TEST_F()
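The grappler hits above show the pattern: `AutoMixedPrecision` is parameterized by a backend mode (`CUDA` vs. `MKL`) and switches its op lists on it. A hedged Python sketch of that dispatch shape (the TensorFlow code is C++; the names mirror the hits, but the returned strings are placeholders, not TensorFlow's actual op lists):

```python
from enum import Enum

class AutoMixedPrecisionMode(Enum):
    CUDA = 0
    MKL = 1

def get_mixed_precision_lists(mode):
    # Mirrors the switch in auto_mixed_precision.cc: each backend mode
    # selects its own allow/deny op lists. The strings below are
    # illustrative labels only.
    if mode is AutoMixedPrecisionMode.MKL:
        return "mkl_op_lists"    # CPU/oneDNN path
    elif mode is AutoMixedPrecisionMode.CUDA:
        return "cuda_op_lists"   # GPU float16 path
    raise ValueError(f"unknown mode: {mode}")
```

The tests at lines 1209/1252/1323 above construct the optimizer the analogous way: `AutoMixedPrecision optimizer{AutoMixedPrecisionMode::MKL}`.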
/external/eigen/bench/btl/cmake/
DFindMKL.cmake63 find_package_handle_standard_args(MKL DEFAULT_MSG MKL_LIBRARIES)
/external/eigen/bench/btl/libs/BLAS/
DCMakeLists.txt11 find_package(MKL)
/external/eigen/bench/btl/
DCMakeLists.txt40 # find_package(MKL)
/external/speex/
Dconfigure.ac261 AC_MSG_CHECKING(for valid MKL)
270 …[AC_MSG_FAILURE([Failed to compile MKL test program. Make sure you set CFLAGS to include the inclu…
/external/eigen/cmake/
DFindBLASEXT.cmake102 "\n have also potentially detected some multithreaded BLAS libraries from the MKL."
314 message(STATUS "BLAS found is Intel MKL:"
DFindBLAS.cmake42 # For MKL case and if no paths are given as hints, we will try to use the MKLROOT
693 message(STATUS "Looking for MKL BLAS: found")
695 message(STATUS "Looking for MKL BLAS: not found")
699 set (BLAS_VENDOR_FOUND "Intel MKL")
/external/tensorflow/tensorflow/core/protobuf/
Drewriter_config.proto103 // Optimize data types for MKL (default is OFF).
/external/pffft/
DREADME.md44 competitive with the fastest ones, such as FFTW, Intel MKL, AMD ACML,
/external/tensorflow/tensorflow/compiler/tf2xla/
DBUILD238 # Note, we specifically removed MKL and multithreaded dependencies so the
239 # standalone does not require the MKL binary blob or threading libraries.
/external/eigen/unsupported/Eigen/
DFFT35 …* - MKL (http://en.wikipedia.org/wiki/Math_Kernel_Library) : fastest, commercial -- may be incompa…
/external/tensorflow/
D.bazelrc163 # Please note that MKL on macOS or Windows is still not supported.
164 # If you would like to use a local MKL instead of downloading, please set the
/external/tensorflow/tensorflow/compiler/xla/
Dxla.proto115 // Generate calls to MKL-DNN in the CPU backend.
