# Download Matrix and Tensor Decomposition Ebook PDF

**Matrix and Tensor Factorization Techniques for Recommender Systems**

A Book

#### by **Panagiotis Symeonidis, Andreas Zioupos**

- Publisher : Springer
- Release : 2016-09-25
- Pages : 102
- ISBN : 9783319413563
- Language : En, Es, Fr & De

This book presents the algorithms used to provide recommendations by exploiting matrix factorization and tensor decomposition techniques. It highlights well-known decomposition methods for recommender systems, such as Singular Value Decomposition (SVD), UV-decomposition and Non-negative Matrix Factorization (NMF), and describes in detail the pros and cons of each method for matrices and tensors. The book provides a detailed theoretical mathematical background of matrix/tensor factorization techniques and a step-by-step analysis of each method, on the basis of an integrated toy example that runs throughout all its chapters and helps the reader to understand the key differences among methods. It also contains two chapters in which different matrix and tensor methods are compared experimentally on real data sets, such as Epinions, GeoSocialRec, Last.fm and BibSonomy, providing further insights into the advantages and disadvantages of each method. The book offers a rich blend of theory and practice, making it suitable for students, researchers and practitioners interested in both recommender systems and factorization methods. Lecturers can also use it for classes on data mining, recommender systems and dimensionality reduction methods.
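As an illustration of the latent factor idea behind the SVD-style recommenders the book covers, here is a minimal NumPy sketch. The toy rating matrix and the rank are invented for this example, not taken from the book: a rank-k truncated SVD compresses a user-item matrix into k latent "taste" factors and yields smoothed scores for ranking items.

```python
import numpy as np

# Toy 5x4 user-item rating matrix (rows: users, columns: items); invented data
R = np.array([[5., 4., 1., 1.],
              [4., 5., 1., 2.],
              [1., 1., 5., 4.],
              [1., 2., 4., 5.],
              [5., 5., 1., 1.]])

# Rank-2 truncated SVD: R ~= U_k diag(s_k) V_k^T captures two latent taste factors
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
R_hat = (U[:, :k] * s[:k]) @ Vt[:k]

# Each row of R_hat gives smoothed scores that can rank items for that user
rel_err = np.linalg.norm(R - R_hat) / np.linalg.norm(R)
```

Because the toy matrix has a clear two-group structure, two factors already reconstruct it closely; in a real recommender the same reconstruction fills in scores for unrated items.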

**Matrix and Tensor Decompositions in Signal Processing**

A Book

#### by **Gérard Favier**

- Publisher : Wiley-ISTE
- Release : 2021-09-15
- Pages : 200
- ISBN : 9781786301550
- Language : En, Es, Fr & De

This second volume presents the main matrix and tensor decompositions and their uniqueness properties, as well as tensor networks that are very useful for the analysis of massive data. Parametric estimation algorithms are presented for identifying the main tensor decompositions. After a brief historical review of compressed sampling methods, an overview of the main methods for recovering matrices and tensors with missing data is given under the low-rank hypothesis. Illustrative examples are provided.
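To make the low-rank recovery idea concrete, here is a minimal NumPy sketch of one standard approach, iterative hard-imputation via truncated SVD. This is an illustrative stand-in under invented data, not one of the book's algorithms: observed entries are kept fixed while missing entries are repeatedly refilled from a rank-r SVD fit.

```python
import numpy as np

def svd_impute(M, mask, rank, n_iter=500):
    """Hard-impute: alternate a truncated SVD fit with re-imposing observed entries."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X = np.where(mask, M, low_rank)   # keep observed entries, refill the rest
    return X

# Synthetic rank-2 matrix with roughly 30% of the entries hidden
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
mask = rng.random(M.shape) > 0.3          # True = observed entry
X = svd_impute(M, mask, rank=2)
rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

With enough observed entries and the correct rank, the iteration recovers the hidden entries to small error, which is exactly the low-rank hypothesis at work.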

**Nonnegative Matrix and Tensor Factorizations**

Applications to Exploratory Multi-way Data Analysis and Blind Source Separation

#### by **Andrzej Cichocki, Rafal Zdunek, Anh Huy Phan, Shun-ichi Amari**

- Publisher : John Wiley & Sons
- Release : 2009-07-10
- Pages : 500
- ISBN : 9780470747285
- Language : En, Es, Fr & De

This book provides a broad survey of models and efficient algorithms for Nonnegative Matrix Factorization (NMF), including NMF's various extensions and modifications, especially Nonnegative Tensor Factorizations (NTF) and Nonnegative Tucker Decompositions (NTD). NMF/NTF and their extensions are increasingly used as tools in signal and image processing and in data analysis, having garnered interest due to their capability to provide new insights and relevant information about the complex latent relationships in experimental data sets. NMF can provide meaningful components with physical interpretations; in bioinformatics, for example, NMF and its extensions have been successfully applied to gene expression, sequence analysis, the functional characterization of genes, clustering and text mining. The authors therefore focus on the algorithms that are most useful in practice: the fastest, the most robust, and the most suitable for large-scale models. Key features:

- Acts as a single-source reference guide to NMF, collating information that is widely dispersed in the current literature, including the authors' own recently developed techniques in the subject area.
- Uses generalized cost functions such as Bregman, Alpha and Beta divergences to present practical implementations of several types of robust algorithms, in particular Multiplicative, Alternating Least Squares, Projected Gradient and Quasi-Newton algorithms.
- Provides a comparative analysis of the different methods in order to identify approximation error and complexity.
- Includes pseudocode and optimized MATLAB source code for almost all algorithms presented in the book.

The increasing interest in nonnegative matrix and tensor factorizations, as well as decompositions and sparse representation of data, will ensure that this book is essential reading for engineers, scientists, researchers, industry practitioners and graduate students across signal and image processing; neuroscience; data mining and data analysis; computer science; bioinformatics; speech processing; biomedical engineering; and multimedia.
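As a taste of the multiplicative algorithms the book covers, here is a minimal NumPy sketch of the classic Lee-Seung multiplicative updates for the Frobenius cost. It is an illustrative baseline with invented data, not the book's optimized MATLAB code: the updates keep both factors entrywise nonnegative by construction.

```python
import numpy as np

def nmf_mu(V, rank, n_iter=300, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F^2."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # Each update multiplies by a nonnegative ratio, so signs never flip
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.random.default_rng(1).random((20, 15))   # nonnegative toy data
W, H = nmf_mu(V, rank=5)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The multiplicative form is the simplest of the algorithm families listed above; the ALS, projected gradient and quasi-Newton variants trade this simplicity for faster convergence.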

**Tensor Decomposition Meets Approximation Theory**

A Book

#### by **Ferre Knaepkens**

- Publisher : Unknown Publisher
- Release : 2017
- Pages : 329
- ISBN : 9876543210XXX
- Language : En, Es, Fr & De

This thesis studies three different subjects: tensors and tensor decomposition, sparse interpolation, and Padé or rational approximation theory. These problems find their origin in various fields within mathematics: on the one hand, tensors originate from algebra and are of importance in computer science and knowledge technology, while on the other hand, sparse interpolation and Padé approximations stem from approximation theory. Although the three problems seem totally unrelated, they are deeply intertwined, and exposing the connections between them is precisely the goal of this thesis. These connections are important because they allow us to solve the symmetric tensor decomposition problem by means of a corresponding sparse interpolation problem or an appropriate Padé approximant. The first section gives a short introduction to tensors. Starting from matrices and vectors, a generalization is made to tensors, and a link is made to other known concepts within matrix algebra. Subsequently, three definitions of tensor rank are discussed. The first definition is the most general and is based on a decomposition by means of the outer product of vectors. The second definition applies only to symmetric tensors and is based on a decomposition by means of symmetric outer products of vectors. The last definition also applies only to symmetric tensors and is based on the decomposition of a related homogeneous polynomial. It can be shown that the last two definitions are equivalent, and they are the only definitions used in the remainder of the thesis; the last definition in particular supplies the connection with approximation theory. Finally, a well-known method (ALS) for finding these tensor decompositions is briefly discussed. ALS has some shortcomings, however, and that is exactly why the connections to approximation theory are so important.
Sections two and three discuss the first of the two problems from approximation theory, namely sparse interpolation. The second section considers the univariate problem, which can be solved with Prony's method: one finds the zeros of a related polynomial or solves a generalized eigenvalue problem. The third section builds on the second and discusses multivariate sparse interpolation; Prony's method for the univariate case is adapted to also provide a solution for the multivariate problem. The fourth and fifth sections address Padé or rational approximation theory, which, as the name suggests, consists of approximating a power series by a rational function. Section four first introduces univariate Padé approximants and states some of their important properties. A connection with continued fractions is briefly made for later use. Finally, some methods for computing Padé approximants are discussed, namely the Levinson algorithm, the determinant formulas and the qd-algorithm. Section five continues from section four and discusses multivariate Padé approximation theory. It is shown that a shift of the univariate conditions occurs; despite this shift, many of the important properties of the univariate case remain true. An extension of the qd-algorithm for multivariate Padé approximants is also discussed. Section six bundles all previous sections to expose the connections between the three seemingly different problems. In the univariate case, the discussion proceeds in two steps: first, the tensor decomposition problem is rewritten as a sparse interpolation problem, and subsequently it is shown that the sparse interpolation problem can be solved by means of Padé approximants. In the multivariate case, the connection between tensor decomposition and sparse interpolation is likewise discussed first.
Subsequently, a parameterized approach is introduced, which converts the multivariate problem into a parameterized univariate problem so that the connections of the first part apply. This parameterized approach also leads to the connection between tensor decomposition, multivariate sparse interpolation and multivariate Padé approximation theory. The seventh and final section consists of two examples, a univariate problem and a multivariate one. The techniques of the previous sections are used to demonstrate the connections of section six. This section also serves as an illustration of the methods of sections two through five for solving sparse interpolation and Padé approximation problems.
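The univariate Prony step described above is compact enough to sketch. The following NumPy example uses the generalized-eigenvalue variant of Prony's method on invented data: given 2t samples of f_j = sum_i c_i z_i^j, the nodes z_i are the generalized eigenvalues of a shifted pair of Hankel matrices, and the weights c_i follow from a Vandermonde system.

```python
import numpy as np

def prony(samples, t):
    """Recover nodes z_i and weights c_i from f_j = sum_i c_i * z_i**j, j = 0..2t-1."""
    f = np.asarray(samples, dtype=complex)
    H0 = np.array([[f[i + j] for j in range(t)] for i in range(t)])       # Hankel
    H1 = np.array([[f[i + j + 1] for j in range(t)] for i in range(t)])   # shifted Hankel
    # Nodes are the generalized eigenvalues of the pencil (H1, H0)
    z = np.linalg.eigvals(np.linalg.solve(H0, H1))
    # Weights from the Vandermonde system f = V c, V[j, i] = z_i**j
    V = np.vander(z, len(f), increasing=True).T
    c = np.linalg.lstsq(V, f, rcond=None)[0]
    return z, c

# Three exponential terms: f_j = 1*0.5^j + 2*2.0^j + 3*(-1)^j
z_true = np.array([0.5, 2.0, -1.0])
c_true = np.array([1.0, 2.0, 3.0])
samples = np.array([np.sum(c_true * z_true**j) for j in range(6)])
z, c = prony(samples, t=3)
```

This is exactly the "generalized eigenvalue problem" route mentioned in the abstract; the alternative route finds the z_i as zeros of the Prony polynomial instead.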

**Matrix and Tensor Decomposition**

Application to Data Fusion and Analysis

#### by **Christian Jutten**

- Publisher : Unknown Publisher
- Release : 2021
- Pages : 329
- ISBN : 9780128157602
- Language : En, Es, Fr & De

**Spectral Learning on Matrices and Tensors**

A Book

#### by **Majid Janzamin, Rong Ge, Jean Kossaifi, Anima Anandkumar**

- Publisher : Unknown Publisher
- Release : 2019-11-25
- Pages : 156
- ISBN : 9781680836400
- Language : En, Es, Fr & De

The authors of this monograph survey recent progress in using spectral methods, including matrix and tensor decomposition techniques, to learn many popular latent variable models. With careful implementation, tensor-based methods can run efficiently in practice, and in many cases they are the only algorithms with provable guarantees on running time and sample complexity. The focus is on a special type of tensor decomposition called CP decomposition, and the authors cover a wide range of algorithms for finding the components of such a decomposition. They also discuss the usefulness of this decomposition by reviewing several probabilistic models that can be learned using such tensor methods. The second half of the monograph looks at practical applications, including TensorLy, an efficient tensor algebra software package with a simple Python interface for expressing tensor operations and a flexible back-end system supporting NumPy, PyTorch, TensorFlow, and MXNet. Spectral Learning on Matrices and Tensors provides a theoretical and practical introduction to designing and deploying spectral learning on both matrices and tensors. It is of interest to students, researchers and practitioners working on modern-day machine learning problems.

**Scalable Low-rank Matrix and Tensor Decomposition on Graphs**

A Book

#### by **Nauman Shahid**

- Publisher : Unknown Publisher
- Release : 2017
- Pages : 329
- ISBN : 9876543210XXX
- Language : En, Es, Fr & De

Author keywords: Principal Component Analysis; graphs; low-rank and sparse decomposition; clustering; low-rank tensors.

**Decomposing Matrices, Tensors, and Images**

A Book

#### by **Elina Robeva**

- Publisher : Unknown Publisher
- Release : 2016
- Pages : 195
- ISBN : 9876543210XXX
- Language : En, Es, Fr & De

In this thesis we apply techniques from algebraic geometry to problems arising from optimization and statistics. In particular, we consider data that takes the form of a matrix, a tensor or an image, and we study how to decompose it so as to find additional and seemingly hidden information about its origin and formation. We show that the practical uses of such decompositions are complemented by appealing algebraic and geometric structure. In Chapter 2 of this thesis we focus on matrix-shaped data. The singular value decomposition, which lies at the core of modern algorithms and can be found efficiently, is not always enough to capture the structure of the data. Oftentimes the matrix at hand, as well as the elements of its decomposition, is required to have a certain positivity structure, and we need to design algorithms and theory to exploit this structure. Statistical mixture models, for instance, are based on finding a nonnegative decomposition of a nonnegative matrix. We study the algebraic and geometric properties of such decompositions in Section 2.1. Another type of decomposition of a nonnegative matrix, which is useful in convex optimization as well as quantum information theory, is the positive semidefinite decomposition. Here we require the elements of the decomposition to be positive semidefinite matrices of a given size. We explore this notion in Section 2.2. One of the most appealing properties of a nonnegative matrix is that we can think of it in terms of a pair of nested polyhedra. We rely on this geometric interpretation when studying nonnegative and positive semidefinite decompositions. In Chapters 3 and 4 we turn our attention to data in the shape of a tensor. It is even more crucial in this case than in the matrix case to find a decomposition, not only because it provides hidden information about the data, but also because it allows us to store the tensor more concisely.
However, one of the biggest obstacles in the field is that finding a decomposition of a general tensor is NP-hard. Inspired by the spectral theorem and the singular value decomposition for matrices, we study tensors whose decomposition consists of elements with an orthogonality structure. We call such tensors orthogonally decomposable, or odeco. One of their best properties is that, like matrices, odeco tensors can be decomposed efficiently. In Chapter 3 we study the spectral properties of such tensors. We give a formula for their eigenvectors and singular vector tuples. We note that computing these for a general tensor is hard both algebraically and computationally. In Chapter 4 we study the variety of orthogonally decomposable tensors, and we give polynomial equations that cut it out. We do this by showing that a tensor is orthogonally decomposable if and only if a given algebra that arises from it is associative, yet another appealing property of odeco tensors. Despite all of these appealing properties, odeco tensors constitute a very low-dimensional variety. This is why in Section 4.2 we conclude our study of tensors by generalizing the notion of orthogonally decomposable tensors to that of frame decomposable tensors, which now cover the space of all tensors. In Chapter 5 we study super-resolution imaging. The aim here is, given a low-resolution blurred image, to increase the resolution and remove the blur. This is achieved by decomposing the image into a sum of simpler images, one for each point source of light. We encode the locations of the point sources of light and their intensities in a discrete measure, and propose a convex optimization problem in the space of measures to find this unknown measure. We show that in the absence of noise and in the case of a one-dimensional image, the global optimum of this optimization problem recovers the true locations.
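The claim that odeco tensors, like matrices, can be decomposed efficiently can be illustrated with the standard tensor power iteration plus deflation. This is a generic sketch with invented weights and sizes, not code from the thesis: a symmetric odeco tensor built from an orthonormal basis has those basis vectors as robust eigenvectors, and power iteration finds them one at a time.

```python
import numpy as np

def tensor_apply(T, x):
    """Contract a symmetric 3-way tensor with x twice: T(., x, x)."""
    return np.einsum('ijk,j,k->i', T, x, x)

def odeco_decompose(T, n_comp, n_iter=100, seed=0):
    """Recover the terms of an odeco tensor by power iteration with deflation."""
    rng = np.random.default_rng(seed)
    T = T.copy()
    lams, vecs = [], []
    for _ in range(n_comp):
        x = rng.standard_normal(T.shape[0])
        x /= np.linalg.norm(x)
        for _ in range(n_iter):          # power iteration converges to a component
            x = tensor_apply(T, x)
            x /= np.linalg.norm(x)
        lam = np.einsum('ijk,i,j,k->', T, x, x, x)
        lams.append(lam)
        vecs.append(x)
        T = T - lam * np.einsum('i,j,k->ijk', x, x, x)   # deflate the found term
    return np.array(lams), np.array(vecs)

# Build an odeco tensor from an orthonormal basis Q and positive weights
rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
lams_true = np.array([3.0, 2.0, 1.5, 1.0])
T = np.einsum('r,ir,jr,kr->ijk', lams_true, Q, Q, Q)
lams, vecs = odeco_decompose(T, n_comp=4)
```

For general tensors this procedure has no such guarantee, which is the contrast the paragraph above draws: orthogonality is what makes the spectral machinery carry over from matrices.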

**Higher-order Kronecker Products and Tensor Decompositions**

A Book

#### by **Carla Dee Martin**

- Publisher : Unknown Publisher
- Release : 2005
- Pages : 482
- ISBN : 9876543210XXX
- Language : En, Es, Fr & De

The second problem in this dissertation involves solving shifted linear systems of the form (A - λI)x = b when A is a Kronecker product of matrices. The Schur decomposition is used to reduce the shifted Kronecker product system to a Kronecker product of quasi-triangular matrices, and the system is then solved using a recursive block procedure that circumvents forming the explicit product.

**Tensor Network Contractions**

Methods and Applications to Quantum Many-Body Systems

#### by **Shi-Ju Ran**

- Publisher : Springer Nature
- Release : 2020-01-01
- Pages : 150
- ISBN : 3030344894
- Language : En, Es, Fr & De

Tensor networks are a fundamental mathematical tool with a huge range of applications in physics, such as condensed matter physics, statistical physics, high energy physics, and quantum information science. This open access book aims to explain tensor network contraction approaches in a systematic way, from the basic definitions to the important applications. It is also useful to those who apply tensor networks in areas beyond physics, such as machine learning and big-data analysis. The tensor network originates from the numerical renormalization group approach proposed by K.G. Wilson in 1975. Through rapid development in the last two decades, tensor networks have become a powerful numerical tool that can efficiently simulate a wide range of scientific problems, with particular success in quantum many-body physics. A variety of tensor network algorithms have been proposed for different problems; however, the connections among them are not well discussed or reviewed. To fill this gap, this book explains the fundamental concepts and basic ideas that connect and/or unify the different strategies of tensor network contraction algorithms. In addition, some recent progress in tensor decomposition techniques and quantum simulations is also presented to help readers better understand tensor networks. This open access book is intended for graduate students, but can also be used as a professional book by researchers in the related fields. To understand most of the contents, only basic knowledge of quantum mechanics and linear algebra is required. To fully understand some advanced parts, the reader will need to be familiar with notions of condensed matter physics and quantum information, which, however, are not necessary for understanding the main parts of the book.
This book is a good source for non-specialists in quantum physics to understand tensor network algorithms and the related mathematics.

**Theory and Computation of Tensors**

Multi-Dimensional Arrays

#### by **Yimin Wei, Weiyang Ding**

- Publisher : Academic Press
- Release : 2016-08-28
- Pages : 148
- ISBN : 0128039809
- Language : En, Es, Fr & De

Theory and Computation of Tensors: Multi-Dimensional Arrays investigates theories and computations of tensors to broaden perspectives on matrices. Data in the Big Data era is not only growing larger but also becoming much more complicated. Tensors (multi-dimensional arrays) arise naturally in many engineering and scientific disciplines because they can represent multi-relational data or nonlinear relationships.

- Provides an introduction to recent results about tensors
- Investigates theories and computations of tensors to broaden perspectives on matrices
- Discusses how to extend numerical linear algebra to numerical multilinear algebra
- Offers examples of how researchers and students can engage in research and the applications of tensors and multi-dimensional arrays

**From Algebraic Structures to Tensors**

A Book

#### by **Gérard Favier**

- Publisher : John Wiley & Sons
- Release : 2020-01-02
- Pages : 318
- ISBN : 1786301547
- Language : En, Es, Fr & De

Nowadays, tensors play a central role in the representation, mining, analysis, and fusion of multidimensional, multimodal, and heterogeneous big data in numerous fields. This set on Matrices and Tensors in Signal Processing aims at giving a self-contained and comprehensive presentation of various concepts and methods, starting from fundamental algebraic structures and proceeding to advanced tensor-based applications, including recently developed tensor models and efficient algorithms for dimensionality reduction and parameter estimation. Although its title suggests an orientation towards signal processing, the results presented in this set will also be of use to readers interested in other disciplines. This first book provides an introduction to matrices and higher-order tensors based on the structures of vector space and tensor space. Some standard algebraic structures are first described, with a focus on the Hilbertian approach for signal representation, and on function approximation based on Fourier series and orthogonal polynomial series. Matrices and hypermatrices associated with linear, bilinear and multilinear maps are studied in particular detail, and some basic results are presented for block matrices. The notions of decomposition, rank, eigenvalue, singular value, and unfolding of a tensor are introduced, emphasizing the similarities and differences between matrices and higher-order tensors.

**Handbook of Robust Low-Rank and Sparse Matrix Decomposition**

Applications in Image and Video Processing

#### by **Thierry Bouwmans, Necdet Serhat Aybat, El-hadi Zahzah**

- Publisher : CRC Press
- Release : 2016-09-20
- Pages : 520
- ISBN : 1498724639
- Language : En, Es, Fr & De

Handbook of Robust Low-Rank and Sparse Matrix Decomposition: Applications in Image and Video Processing shows you how robust subspace learning and tracking by decomposition into low-rank and sparse matrices provide a suitable framework for computer vision applications. Incorporating both existing and new ideas, the book conveniently gives you one-stop access to a number of different decompositions, algorithms, implementations, and benchmarking techniques. Divided into five parts, the book begins with an overall introduction to robust principal component analysis (PCA) via decomposition into low-rank and sparse matrices. The second part addresses robust matrix factorization/completion problems while the third part focuses on robust online subspace estimation, learning, and tracking. Covering applications in image and video processing, the fourth part discusses image analysis, image denoising, motion saliency detection, video coding, key frame extraction, and hyperspectral video processing. The final part presents resources and applications in background/foreground separation for video surveillance. With contributions from leading teams around the world, this handbook provides a complete overview of the concepts, theories, algorithms, and applications related to robust low-rank and sparse matrix decompositions. It is designed for researchers, developers, and graduate students in computer vision, image and video processing, real-time architecture, machine learning, and data mining.

**Theory and Computation of Complex Tensors and its Applications**

A Book

#### by **Maolin Che, Yimin Wei**

- Publisher : Springer Nature
- Release : 2020-04-01
- Pages : 250
- ISBN : 9811520593
- Language : En, Es, Fr & De

The book provides an introduction to very recent results about tensors and mainly focuses on the authors' own work and perspective. A systematic description of how to extend numerical linear algebra to numerical multilinear algebra is also delivered. The authors design a neural network model for computing the rank-one approximation of real tensors, a normalization algorithm to convert some nonnegative tensors to plane stochastic tensors, a probabilistic algorithm for locating a positive diagonal in a nonnegative tensor, adaptive randomized algorithms for computing approximate tensor decompositions, and a QR-type method for computing U-eigenpairs of complex tensors. This book could be used for a graduate course, such as an introduction to tensors. Researchers may also find it helpful as a reference in tensor research.

**Advances in Knowledge Discovery and Data Mining**

23rd Pacific-Asia Conference, PAKDD 2019, Macau, China, April 14-17, 2019, Proceedings

#### by **Qiang Yang, Zhi-Hua Zhou, Zhiguo Gong, Min-Ling Zhang, Sheng-Jun Huang**

- Publisher : Springer
- Release : 2019-05-20
- Pages : 627
- ISBN : 303016148X
- Language : En, Es, Fr & De

The three-volume set LNAI 11439, 11440, and 11441 constitutes the thoroughly refereed proceedings of the 23rd Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2019, held in Macau, China, in April 2019. The 137 full papers presented were carefully reviewed and selected from 542 submissions. The papers present new ideas, original research results, and practical development experiences from all KDD related areas, including data mining, data warehousing, machine learning, artificial intelligence, databases, statistics, knowledge engineering, visualization, decision-making systems, and the emerging applications. They are organized in the following topical sections: classification and supervised learning; text and opinion mining; spatio-temporal and stream data mining; factor and tensor analysis; healthcare, bioinformatics and related topics; clustering and anomaly detection; deep learning models and applications; sequential pattern mining; weakly supervised learning; recommender system; social network and graph mining; data pre-processing and feature selection; representation learning and embedding; mining unstructured and semi-structured data; behavioral data mining; visual data mining; and knowledge graph and interpretable data mining.

**Multilinear Operators for Higher-order Decompositions**

A Book

#### by **Tamara Gibson Kolda**

- Publisher : Unknown Publisher
- Release : 2006
- Pages : 28
- ISBN : 9876543210XXX
- Language : En, Es, Fr & De

We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
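Both operators have direct one-line realizations via `numpy.einsum` for third-order tensors. The following is an illustrative rendering of the definitions, not code from the report: the Kruskal operator sums outer products of matching columns, and the Tucker operator multiplies a core tensor by a matrix in every mode.

```python
import numpy as np

def kruskal(A, B, C):
    """Kruskal operator [[A, B, C]]: sum of outer products of matching columns."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def tucker(G, A, B, C):
    """Tucker operator [[G; A, B, C]]: n-mode product of the core G with each factor."""
    return np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((4, 2)) for _ in range(3))

# PARAFAC is the special case of Tucker with a superdiagonal core
G = np.zeros((2, 2, 2))
G[0, 0, 0] = G[1, 1, 1] = 1.0
```

The superdiagonal-core identity above is exactly the relationship between the two decompositions that makes the two operators natural companions.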

**Tensor Spaces and Numerical Tensor Calculus**

A Book

#### by **Wolfgang Hackbusch**

- Publisher : Springer Science & Business Media
- Release : 2012-02-23
- Pages : 500
- ISBN : 3642280277
- Language : En, Es, Fr & De

Special numerical techniques are already needed to deal with n×n matrices for large n. Tensor data are of size n×n×...×n = n^d, where n^d exceeds computer memory by far. They appear in problems of high spatial dimension. Since standard methods fail, a particular tensor calculus is needed to treat such problems. The monograph describes how tensors can be treated in practice and how numerical operations on them can be performed. Applications include problems from quantum chemistry, the approximation of multivariate functions, and the solution of PDEs, e.g., with stochastic coefficients.
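The size estimate n^d is easy to make concrete. This back-of-the-envelope count (illustrative numbers only, not from the monograph) compares storing a full d-way tensor against two low-parametric formats of the kind such a tensor calculus relies on: a rank-r CP format and a tensor-train (TT) format with all internal ranks equal to r.

```python
# Entry counts for a d-way tensor with mode size n; doubles are 8 bytes each
n, d, r = 100, 10, 20

full_entries = n ** d                          # 10^20 entries: far beyond any memory
cp_entries = d * n * r                         # CP format: d factor matrices of size n x r
tt_entries = 2 * n * r + (d - 2) * n * r * r   # TT format: boundary plus interior cores

print(f"full tensor : {full_entries * 8:.3e} bytes")
print(f"CP format   : {cp_entries * 8:.3e} bytes")
print(f"TT format   : {tt_entries * 8:.3e} bytes")
```

The full tensor needs about 8·10²⁰ bytes, while both structured formats stay in the kilobyte-to-megabyte range, which is precisely why operations are carried out on the format parameters rather than on the entries.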