


Logic and Linear Algebra


However, there are very few figures and little discussion of a geometric perspective, which admittedly the author notes in the first chapter, saying: "While much of our intuition will come from examples in two and three dimensions, we will maintain an algebraic approach to the subject, with the geometry being secondary. Others may wish to switch this emphasis around, and that can lead to a very fruitful and beneficial course, but here and now we are laying our bias bare." There isn't really anything I'd call an index or glossary, in the sense of an alphabetized reference. Accuracy: I did not find any errors. The text is arranged in such a way that updates would be easy to add. Since linear algebra is so important in computer animation, the lack of examples dealing with this application makes the book feel a little out-of-date.

Remarks regarding notation. It is very convenient, but not entirely consistent, to denote the zero element and addition in K and L by the same symbols. Here are two examples of cases where a different notation is preferable. Let L be the multiplicative group of positive real numbers. We regard L as an abelian group with respect to multiplication, and we introduce in L multiplication by a scalar from R according to the formula (a, z) ↦ z^a.

It is easy to verify that all the conditions of Definition 1 are satisfied. Second, given a complex space L, we define a new vector space L̄ with the same additive group as L, but a different law of multiplication by a scalar: (a, l) ↦ āl, where ā is the complex conjugate of a. Remarks regarding diagrams and graphic representations. Many general concepts and theorems of linear algebra are conveniently illustrated by diagrams and pictures.

    We want to warn the reader immediately about the dangers of such illustrations. We live in a three-dimensional space and our diagrams usually portray two- or three-dimensional images. In linear algebra we work with space of any finite number of dimensions and in functional analysis we work with infinite-dimensional spaces.

    Our "low-dimensional" intuition can be greatly developed, but it must be developed systematically. Here is a simple example: how are we to imagine the general arrangement of two planes in four-dimensional space? Imagine two planes in R3 intersecting along a straight line which splay out everywhere along this straight line except at the origin, vanishing into the fourth dimension.


The physical space R³ is linear over the real field. The unfamiliarity of the geometry of a linear space over K can be associated with the properties of this field. A straight line over C is a one-dimensional coordinate space C¹. It is, however, natural to imagine multiplication by a complex number a, acting on C¹, in terms of the geometric representation of C¹ as R² (the "Argand plane" or the "complex plane", not to be confused with C²!).

Over a finite field K, finite-dimensional coordinate spaces are finite sets, and it is sometimes useful to associate discrete images with a linear geometry over K. Our diagrams, however, are drawn in a Euclidean plane or space. This means that not only are addition of vectors and multiplication by a scalar defined in this space, but the lengths of vectors, the angles between vectors, the areas and volumes of figures, and so on are also defined. Our diagrams carry compelling information about these "metric" properties, and we perceive them automatically, though they are in no way reflected in the general axiomatics of linear spaces.

    It is impossible to imagine that one vector is shorter than another or that a pair of vectors forms a right angle unless the space is equipped with a special additional structure, for example, an abstract inner product.

Chapter 2 of this book is devoted to such structures. Exercises: Do the following sets of real numbers form a linear space over Q? Let S be some set and let F(S) denote the space of functions on S with values in the field K.

Which of the following conditions on a function are linear? Let L be the linear space of continuous real functions on the segment [−1, 1]. Which of the functionals on L are linear functionals?

I like the way that examples require the reader to click on their titles, so that the titles serve to break up the "wall of text" while the amount of information presented initially isn't overwhelming. The given organization of topics is clear and logical.

The idiosyncratic chapter naming convention does make navigating via hyperlinks a little confusing, though. Interface rating: 5. There weren't any navigation issues or problems with distracting display features. I liked the choices of what to have visible when a section is opened and what requires an additional click. Grammatical errors: I did not notice any.

To determine these values, we introduce the concept of the sum of linear subspaces. Let L₁, L₂ ⊂ L be linear subspaces; their sum is L₁ + L₂ = {l₁ + l₂ | l₁ ∈ L₁, l₂ ∈ L₂}, the smallest subspace containing both. The following theorem relates the dimension of the sum of two subspaces to the dimensions of the subspaces themselves and of their intersection: dim(L₁ + L₂) + dim(L₁ ∩ L₂) = dim L₁ + dim L₂.
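A quick numerical check of this dimension formula, as a sketch in Python with numpy; the two subspaces of R⁵ below and the rank-based dimension counts are illustrative choices, not taken from the text:

```python
import numpy as np

# Columns of A and B span the subspaces L1 and L2 of R^5.
A = np.array([[1, 0], [0, 1], [1, 1], [0, 0], [0, 0]], dtype=float)
B = np.array([[1, 0], [0, 0], [1, 0], [0, 1], [0, 1]], dtype=float)

dim_L1 = np.linalg.matrix_rank(A)
dim_L2 = np.linalg.matrix_rank(B)
# The sum L1 + L2 is spanned by the columns of both matrices together.
dim_sum = np.linalg.matrix_rank(np.hstack([A, B]))
# The theorem then yields the dimension of the intersection.
dim_int = dim_L1 + dim_L2 - dim_sum

print(dim_L1, dim_L2, dim_sum, dim_int)   # 2 2 3 1
```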

Proof. Choose a basis of L₁ ∩ L₂ and extend it to a basis of L₁ and, separately, to a basis of L₂; we shall say that such a pair of bases in L₁ and L₂ is concordant. The union of a concordant pair of bases spans L₁ + L₂ and has exactly dim L₁ + dim L₂ − dim(L₁ ∩ L₂) elements, so the assertion of the theorem follows from here; therefore, we have only to verify its linear independence.

Let concordant bases be chosen for the pairs L₁, L₂ and L₁′, L₂′ as above; finally, we extend these unions up to two bases of L. The linear automorphism that transforms the first basis into the second one establishes the fact that the pairs L₁, L₂ and L₁′, L₂′ have the same arrangement. General position. In the notation of the preceding section, we shall say that the subspaces L₁, L₂ ⊂ L are in general position if their intersection has the smallest dimension, and their sum the greatest dimension, permitted by the inequalities of Corollary 5.

For example, two planes in three-dimensional space are in general position if they intersect along a straight line, while two planes in four-dimensional space are in general position if they intersect only at a point. The term "general position" originates from the fact that, in some sense, most pairs of subspaces L₁, L₂ are arranged in general position, while other arrangements are degenerate.
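This can be seen experimentally: two random 2-dimensional subspaces of R⁴ are, with probability 1, in general position. A numpy sketch (the random spans and the seed are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two random 2-dimensional subspaces of R^4, spanned by random columns.
A = rng.standard_normal((4, 2))
B = rng.standard_normal((4, 2))

dim_sum = np.linalg.matrix_rank(np.hstack([A, B]))   # expected: 4
dim_int = 2 + 2 - dim_sum                            # expected: 0
print(dim_sum, dim_int)
```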

This assertion can be refined in various ways. One method is to describe the set of pairs of subspaces by parameters and to verify that a pair fails to be in general position only if these parameters satisfy additional relations, which generic parameters do not satisfy.

    It is also possible to study the invariants characterizing the relative arrangement of triples, quadruples, and higher numbers of subspaces of L. The combinatorial difficulties here grow rapidly, and in order to solve this problem a different technique is required; in addition, beginning with quadruples, the arrangement is no longer characterized just by discrete invariants, such as the dimensions of different sums and intersections.

    We note also that, as our "physical" intuition shows, the arrangement, say, of a straight line relative to a plane, is characterized by the angle between them.

    In a purely linear situation, there is only the difference between a "zero" and a "non-zero" angle. We shall now study n-tuples of subspaces.

A space L is the direct sum of its subspaces L₁, …, Lₙ if every vector l ∈ L has a unique representation l = l₁ + … + lₙ with lᵢ ∈ Lᵢ. Among the equivalent criteria for a sum to be direct is the condition (a) that zero has only the trivial representation as such a sum. Proof. If some vector had two distinct representations, their difference would be a non-trivial representation of zero; inverting this argument, we find that the violation of condition (a) implies the non-uniqueness of the representation of zero.

We shall now examine the relationship between decompositions into a direct sum and special linear projection operators (cf. Theorem 5). A linear operator p: L → L is called a projection if p² = p; the subspaces im p and ker p then decompose L into a direct sum. Their linearity and the property p² = p are verified directly. Let p₁, …, pₙ: L → L be a finite set of projection operators satisfying the conditions pᵢpⱼ = 0 for i ≠ j and p₁ + … + pₙ = id. Then L is the sum of the images of the pᵢ; to prove that this sum is a direct sum we apply criterion (a) of Theorem 5 (see the numerical sketch below). Direct complements. External direct sums. Thus far we have started from a set of subspaces L₁, …, Lₙ of a given space L. Now let L₁, …, Lₙ be linear spaces given in their own right. We define their external direct sum L as the set of n-tuples (l₁, …, lₙ), where lᵢ ∈ Lᵢ, with componentwise addition and multiplication by a scalar.
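Returning to projections: as a sketch of their correspondence with direct sums, the following numpy fragment checks the conditions p² = p, p₁p₂ = 0, and p₁ + p₂ = id for the coordinate projections of R³ (an illustrative example, not one from the text):

```python
import numpy as np

# Projection of R^3 onto the plane z = 0 along the z-axis.
p1 = np.array([[1., 0., 0.],
               [0., 1., 0.],
               [0., 0., 0.]])
p2 = np.eye(3) - p1        # the complementary projection onto the z-axis

assert np.allclose(p1 @ p1, p1)                 # p1 is idempotent
assert np.allclose(p2 @ p2, p2)                 # p2 is idempotent
assert np.allclose(p1 @ p2, np.zeros((3, 3)))   # p1 p2 = 0
assert np.allclose(p1 + p2, np.eye(3))          # p1 + p2 = id

l = np.array([1., 2., 3.])
# The unique decomposition l = l1 + l2 with l1 in im p1, l2 in im p2.
print(p1 @ l, p2 @ l)
```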

It is not difficult to verify that this external direct sum L satisfies the axioms of a linear space. The mapping fᵢ: Lᵢ → L, which sends lᵢ to the n-tuple with lᵢ in the ith place and zeros elsewhere, is an injective linear mapping. Identifying Lᵢ with fᵢ(Lᵢ), we obtain a linear space which contains the Lᵢ and decomposes into their direct sum. This justifies the name external direct sum. Direct sums of linear mappings. Let L = L₁ ⊕ … ⊕ Lₙ, M = M₁ ⊕ … ⊕ Mₙ, and let f: L → M be a linear mapping such that f(Lᵢ) ⊂ Mᵢ. We denote by fᵢ the induced linear mapping Lᵢ → Mᵢ. The external direct sum of linear mappings is defined analogously.

Choosing bases of L and M that are unions of the bases of the Lᵢ and Mᵢ respectively, we find that the matrix of f consists of blocks lying along the diagonal, namely the matrices representing the mappings fᵢ; the other locations contain zeros.

Orientation of real linear spaces. Let L be a finite-dimensional linear space over the field of real numbers.

Given two bases (e₁, …, eₙ) and (e₁′, …, eₙ′) of L, there is a unique automorphism f of L that maps eᵢ into eᵢ′. However, we pose a more subtle question: can f be deformed continuously into the identity, that is, is there a family f_t of automorphisms with f₀ = id and f₁ = f? Only the elements of the matrix of f_t in some basis must vary continuously as a function of t. For this there is an obvious necessary condition: det f > 0, since the determinant along such a family varies continuously and never vanishes. The converse is also true: every real matrix with positive determinant can be connected to the unit matrix E by a continuous curve of invertible matrices. This assertion can, obviously, be formulated differently: the set of real matrices with positive determinant is path-connected. We shall prove this assertion by dividing it into a series of steps. (a) Let A = B₁ ⋯ Bₙ, where A and the Bᵢ are matrices with positive determinants.

If all the Bᵢ can be connected to E by a continuous curve, then so can A. The device of changing the scale and the origin of t is used only because we stipulated that the curves of matrices be parametrized by numbers t ∈ [0, 1]. Obviously, any intermediate parametrization intervals can be used, all required deformations can be performed successively, and the scale need be changed only at the end. Therefore, in what follows, we shall ignore the parametrization intervals.

We denote by E_{st} the matrix with a one at the location (s, t) and zeros elsewhere. Every invertible matrix is a product of elementary factors of the form E + λE_{st}, s ≠ t, and of diagonal factors. Assuming that its determinant is positive, we shall show how to connect it to E with the help of several successive deformations, using the results of steps (a) and (b) above. By varying λ in the factors E + λE_{st} from its initial value to zero, we can deform all such factors into E, and we can therefore assume that they are absent at the outset.

The remaining matrices F_s(λ) are diagonal: λ stands in the location (s, s) and ones elsewhere. The deformation will yield either the unit matrix or the matrix of the linear mapping that changes one of the basis vectors into the opposite vector, leaving the remaining basis vectors unaffected.

At this stage the result of the deformation of A will be the matrix of the composition of such transformations, that is, a diagonal matrix with entries ±1 and positive determinant. The matrix diag(−1, −1) is connected to the unit 2 × 2 matrix by the curve of rotations with entries cos t, −sin t, sin t, cos t, as t runs from π to 0. The proof is completed by collecting all −1's in pairs (their number is even, since the determinant is positive) and performing such deformations on all the pairs. We now return to the orientation. It is clear that the set of ordered bases of L is divided into precisely two classes consisting of identically oriented bases, while bases from different classes are oriented differently, or oppositely.

The choice of one of these classes is called an orientation of the space L. The orientation of a one-dimensional space corresponds to indicating the "positive direction" in it, that is, the half-line R₊. Reversing the sign of one of the vectors eᵢ reverses the orientation. In three-dimensional physical space the choice of a specific orientation can be related to human physiology: in most people the heart is located on the left side. The question of whether there exist purely physical processes which would distinguish an orientation of space, i.e., distinguish left from right, is far less trivial.
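The orientation class of a pair of bases can be computed from the sign of the determinant of the transition matrix between them. A small numpy sketch; the helper same_orientation is our illustrative name, not the text's:

```python
import numpy as np

def same_orientation(basis1, basis2):
    """Bases are given as columns; they are identically oriented iff the
    transition matrix between them has positive determinant."""
    transition = np.linalg.solve(basis1, basis2)
    return np.linalg.det(transition) > 0

e = np.eye(3)
e_swapped = e[:, [1, 0, 2]]            # swapping two vectors reverses orientation
e_negated = e * np.array([1, 1, -1])   # reversing the sign of e3 likewise

print(same_orientation(e, e_swapped))          # False
print(same_orientation(e, e_negated))          # False
print(same_orientation(e_swapped, e_negated))  # True: two reversals cancel
```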

Which type of arrangement should be regarded as the general one? Prove that triples of pairwise distinct straight lines in K³ are all identically arranged, and that this assertion is not true for quadruples.

Prove that if Lᵢ ⊂ …; do the same problem for direct decompositions. Let p: L → L be a projection operator. Prove that L = im p ⊕ ker p; based on this, show that in an appropriate basis of L any projection operator p can be represented by a diagonal matrix with ones and zeros along the diagonal. Let L be an n-dimensional space over a finite field consisting of q elements.

Quotient Spaces. Let M ⊂ L be a linear subspace, and consider its translations l + M = {l + m | m ∈ M} by vectors l ∈ L. We shall shortly verify that these translations do not necessarily have to be linear subspaces of L; they are called linear subvarieties. We shall start by proving the following lemma: l₁ + M₁ = l₂ + M₂ implies M₁ = M₂ and l₁ − l₂ ∈ M₁. Thus any linear subvariety uniquely determines the linear subspace M whose translation it is.

The translation vector, however, is determined only to within an element belonging to this subspace. Proof of the lemma. First, let l₁ − l₂ ∈ M; writing m₀ = l₁ − l₂, note that when m runs through all vectors in M, m − m₀ also runs through all vectors in M, so l₁ + M = l₂ + M. Conversely, if l₁ + M₁ = l₂ + M₂, set m₀ = l₂ − l₁; since 0 ∈ M₂, we must have m₀ ∈ M₁, and M₁ = M₂ follows. This completes the proof. The quotient space L/M consists of the subvarieties l + M, with the operations defined on representatives: (l₁ + M) + (l₂ + M) = (l₁ + l₂) + M and a(l + M) = al + M. Verification of the correctness of the definition consists of the following steps: the results do not depend on the choice of representatives, which follows from Lemma 6; and, once again according to Lemma 6, the operations are well defined as operations on subvarieties. It remains to verify the axioms of a linear space. For example, one of the distributivity formulas is verified by passing to representatives, using distributivity in L, and returning to classes. The canonical mapping L → L/M, l ↦ l + M, is linear. It is surjective, and its fibres, the inverse images of the elements, are precisely the subvarieties corresponding to these elements.
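A convenient computational model of L/M picks one canonical representative from each class l + M. Below the representative is taken to be the component of l orthogonal to M; this auxiliary choice uses the inner product of Rⁿ, which is not part of the quotient construction itself, and is purely illustrative:

```python
import numpy as np

M = np.array([[1., 0., 1.]]).T   # M = span{(1, 0, 1)} inside L = R^3

def coset_representative(l, M):
    """Canonical representative of the class l + M: the component of l
    orthogonal to M. Two vectors are congruent mod M iff their
    representatives coincide."""
    P = M @ np.linalg.pinv(M)    # orthogonal projector onto M
    return l - P @ l

l1 = np.array([2., 3., 0.])
l2 = l1 + 5 * M[:, 0]            # l2 differs from l1 by an element of M
print(np.allclose(coset_representative(l1, M),
                  coset_representative(l2, M)))   # True: same class
```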

That the fibres are exactly these subvarieties follows from Lemma 6. Applying Theorem 3 to the canonical mapping, we obtain in the finite-dimensional case (Corollary 6) the formula dim L/M = dim L − dim M.

We pose the following problem. Given two mappings f: L → M and g: L → N, when does there exist a mapping h: M → N such that g = h ∘ f? The answer for linear mappings is given by the following result: for h to exist it is necessary and sufficient that ker f ⊂ ker g. Indeed, if g = h ∘ f and f(l) = 0, then g(l) = h(f(l)) = 0; therefore, ker f ⊂ ker g. Conversely, let ker f ⊂ ker g. We first construct h on the subspace im f ⊂ M.

    It is necessary to verify that h is determined uniquely and linearly on im f.


The second property follows automatically from the linearity of f and g. Now it is sufficient to extend the mapping h from the subspace im f ⊂ M to the entire space M, for example, by selecting a basis in im f, extending it up to a basis of M, and setting h equal to zero on the additional vectors. Let g: L → M be a linear mapping; we have already defined the kernel and the image of g, and we set coker g = M/im g, its cokernel.
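The construction of h can be imitated numerically: on im f the value h(f(l)) = g(l) is forced, and the Moore-Penrose pseudoinverse supplies one particular extension, by zero on the orthogonal complement of im f rather than on a chosen complementary basis as in the text. A sketch, with illustrative matrices:

```python
import numpy as np

# f: L -> M and g: L -> N with ker f contained in ker g.
F = np.array([[1., 0., 0.],
              [0., 1., 0.]])          # ker f = span{e3}
G = np.array([[1., 1., 0.]])          # ker g also contains e3

# h on im f is forced by h(f(l)) = g(l); the pseudoinverse extends it
# by zero on the orthogonal complement of im f (one possible extension).
H = G @ np.linalg.pinv(F)

l = np.array([3., -2., 7.])
print(np.allclose(H @ (F @ l), G @ l))   # True: g = h o f
```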

Fredholm's finite-dimensional alternative. Let f: L → M be a linear mapping, and call ind f = dim ker f − dim coker f its index. It follows from the preceding section that if L and M are finite-dimensional, then the index of f depends only on L and M: ind f = dim L − dim M. This implies the so-called Fredholm alternative: if dim L = dim M, then either the equation f(x) = y is uniquely solvable for every y, or the homogeneous equation f(x) = 0 has a non-zero solution. Exercise: prove that the mapping … is a linear isomorphism; then the canonical mapping is an isomorphism.
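A numerical check that the index does not depend on f, only on dim L and dim M (a sketch; the random matrices and dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
L_dim, M_dim = 5, 3

for _ in range(3):
    A = rng.standard_normal((M_dim, L_dim))   # some mapping f: L -> M
    r = np.linalg.matrix_rank(A)
    dim_ker = L_dim - r              # by rank-nullity
    dim_coker = M_dim - r            # coker f = M / im f
    print(dim_ker - dim_coker)       # always dim L - dim M = 2
```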

Duality. It is enough to say that the "wave-particle" duality in quantum mechanics is adequately expressed precisely in terms of the linear duality of infinite-dimensional linear spaces (more precisely, by the combination of linear and group duality in Fourier analysis). Symmetry between L and L*. The dual space L* consists of the linear functionals on L with values in K. Instead of f(l) we shall write ⟨f, l⟩; the symbol is analogous to the inner product, but the vectors

are from different spaces! We have thus defined a mapping L* × L → K, linear with respect to each of the two arguments f, l with the other held fixed. In general, mappings L × M → K with this property are said to be bilinear, and are also called pairings of the spaces L and M. The canonical mapping ε_L: L → L** sends a vector l to the functional f ↦ ⟨f, l⟩ on L*; if dim L is finite, ε_L is an isomorphism. Symmetry between dual bases. Given a basis e₁, …, eₙ of L, the dual basis e¹, …, eⁿ of L* is defined by the conditions ⟨eʲ, eₖ⟩ = 1 for j = k and 0 otherwise. Thus the eʲ and eₖ form a dual pair of bases, and this relation is symmetric. In dual coordinates the pairing becomes ⟨f, l⟩ = Σᵢ fᵢlᵢ; this formula is completely analogous to the formula for the inner product of vectors in Euclidean space, but here it relates vectors from different spaces.
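In coordinates, if the basis vectors e₁, …, eₙ are the columns of a matrix E, the dual basis functionals are the rows of E⁻¹. A numpy sketch (the basis below is an illustrative choice):

```python
import numpy as np

E = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [0., 0., 1.]])        # columns e_1, e_2, e_3: a basis of R^3

E_dual = np.linalg.inv(E)           # rows are the dual functionals e^1, e^2, e^3

# <e^j, e_k> = 1 iff j = k: pairing dual functionals against basis vectors.
print(np.allclose(E_dual @ E, np.eye(3)))   # True

# The coordinates of an arbitrary vector l in the basis are <e^j, l>.
l = np.array([2., 3., 4.])
print(E_dual @ l)                   # coefficients of l in the basis e_1, e_2, e_3
```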

Dual or conjugate mapping. Let f: L → M be a linear mapping of linear spaces. We shall now show that there exists a unique linear mapping f*: M* → L* such that ⟨f*(m*), l⟩ = ⟨m*, f(l)⟩ for all m* ∈ M* and l ∈ L. Uniqueness: let f₁* and f₂* be two such mappings. We fix m* and vary l. Then the element (f₁* − f₂*)(m*) ∈ L*, as a linear functional on L, assumes only zero values and hence equals zero. Existence: we fix m* ∈ M* and regard the expression ⟨m*, f(l)⟩ as a function on L.

The linearity of f and the bilinearity of ⟨ , ⟩ imply that this function is linear. Hence it belongs to L*. We denote it by f*(m*). The bilinearity of ⟨ , ⟩ in the first argument shows that m* ↦ f*(m*) is itself linear; this means that f* is a linear mapping.

Assume that bases have been selected in L and M, and dual bases in L* and M*. Let f in these bases be represented by the matrix A.

We assert that f* in the dual bases is represented by the transposed matrix Aᵗ. Indeed, let B be the matrix of f*; comparing ⟨f*(m*), l⟩ with ⟨m*, f(l)⟩ coordinatewise gives B = Aᵗ. The basic properties of the conjugate mapping are summarized in the following theorem: (f + g)* = f* + g*, (g ∘ f)* = f* ∘ g*, id* = id, and f** = f under the canonical identification of L** with L. If it is assumed that L and M are finite-dimensional, then it is simplest to verify all of these assertions by representing f and g by matrices in dual bases and using the simple properties of the transposition operation: (A + B)ᵗ = Aᵗ + Bᵗ, (AB)ᵗ = BᵗAᵗ, (Aᵗ)ᵗ = A. We leave it as an exercise to the reader to verify invariance.
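A numerical check of the defining identity ⟨f*(m*), l⟩ = ⟨m*, f(l)⟩ with f* represented by the transpose; the standard bases used here are self-dual under this pairing, and the random data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))     # f: L -> M with dim L = 4, dim M = 3

l = rng.standard_normal(4)          # a vector in L
m_star = rng.standard_normal(3)     # a functional on M (row of coefficients)

# <m*, f(l)> versus <f*(m*), l>, with f* represented by the transpose.
print(np.allclose(m_star @ (A @ l), (A.T @ m_star) @ l))   # True
```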

Duality between subspaces of L and L*. Let M ⊂ L be a linear subspace. We denote by M⊥ ⊂ L* the set of functionals which vanish on M and call it the orthogonal complement of M.

The following assertions summarize the basic properties of this construction (L is assumed to be finite-dimensional). There is a canonical isomorphism L*/M⊥ → M*. It is constructed as follows: the class l* + M⊥ is mapped to the restriction of l* to M. This is independent of the choice of l*, because the restrictions of the functionals from M⊥ to M are null restrictions. The linearity of this mapping is obvious. It is surjective, because any linear functional on M extends to a functional on L: the functional f on M given by its values f(eᵢ) on a basis of M extends, after completing this basis to a basis of L, by arbitrary values on the new vectors. It is injective: indeed, it has a null kernel, since a functional whose restriction to M is null lies in M⊥. Finally, dim M⊥ = dim L − dim M; indeed, this follows from the preceding assertion and Corollary 6.

The proof is left as an exercise. Exercises. Let the linear mapping g: M → N be given; relate (im g)⊥ to ker g*, and hence derive "Fredholm's third theorem". The sequence of linear spaces and mappings L → M → N is called exact if the image of each mapping equals the kernel of the next; check the following assertions about exactness and duality. We know that if the mapping f: L → M in some bases can be represented by the matrix A, then the mapping f* in the dual bases can be represented by the matrix Aᵗ. Deduce that the rank of a matrix equals the rank of the transposed matrix, i.e., that the maximal number of linearly independent rows of a matrix equals the maximal number of linearly independent columns.
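The last exercise can at least be checked numerically; the rank-deficient matrix below, built as a product of thin factors, is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(3)
# A rank-2 matrix of shape 5 x 6, built as a product of thin factors.
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 6))
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T))   # 2 2
```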

The Structure of a Linear Mapping. In this section we shall begin the study of the following problem: to what simplest form can a linear mapping f: L → M be reduced by a suitable choice of bases? When L and M are entirely unrelated to one another, the answer is very simple. In matrix language, we are talking about putting the matrix of f into its simplest form with the help of appropriate bases, specially adapted to the structure of f. Equivalently, we are interested in the invariants of the arrangement of the graph Γ_f in L ⊕ M. For the case when the bases in L and M can be selected independently, the answer is given by the following theorem.

Let f: L → M be a linear mapping of finite-dimensional spaces. Then there exist bases of L and M in which the matrix of f consists of an identity block of size r = rank f in the upper left corner and zeros elsewhere. For the proof, write L = L₀ ⊕ L₁ with L₀ = ker f; we need only verify that f determines an isomorphism of L₁ with im f, and this restriction is injective because the kernel of f, i.e., L₀, intersects L₁ only at the origin. For an arbitrary matrix, one can obviously realize a mapping of coordinate spaces with this matrix and then apply to it the assertion (b) above. We now proceed to the study of linear operators. We begin by introducing the simplest class. The linear operator f: L → L is diagonalizable if either one of the following two equivalent conditions holds: there exists a basis of L in which the matrix of f is diagonal; L decomposes into a direct sum of one-dimensional f-invariant subspaces.

The equivalence of these conditions is easily verified. Diagonalizable operators form the simplest and, in many respects, the most important class of operators. For example, over the field of complex numbers, as we shall show below, any operator can be made diagonalizable by an infinitesimal change of its matrix, so that an operator "in general position" is diagonalizable (see the sketch below). To understand what can prevent an operator from being diagonalizable, we shall introduce two definitions and prove one theorem.
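A sketch of this "general position" phenomenon: a non-diagonalizable Jordan block acquires distinct eigenvalues, and hence a basis of eigenvectors, after a tiny random perturbation (the sizes and the seed are illustrative):

```python
import numpy as np

J = np.array([[2., 1.],
              [0., 2.]])            # a Jordan block: not diagonalizable

rng = np.random.default_rng(4)
J_eps = J + 1e-8 * rng.standard_normal((2, 2))

# The perturbed matrix has two distinct eigenvalues, hence two
# independent eigenvectors, and is therefore diagonalizable.
w, V = np.linalg.eig(J_eps)
print(w)                            # two nearly equal but distinct values
print(np.linalg.matrix_rank(V))     # 2: the eigenvectors form a basis
```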

A one-dimensional subspace L₁ ⊂ L is called proper (or invariant) with respect to f if f(L₁) ⊂ L₁. If L₁ is such a subspace, then the effect of f on it is equivalent to multiplication by a scalar λ ∈ K. This scalar is called the eigenvalue of f on L₁.

We shall determine when f has at least one proper subspace. Let L be a finite-dimensional linear space, f: L → L a linear operator, and A its matrix in some basis. We denote by P(t) the polynomial det(tE − A) with coefficients in the field K (det denotes the determinant) and call it the characteristic polynomial of the operator f and of the matrix A.

The characteristic polynomial does not depend on the choice of basis: a change of basis replaces A by C⁻¹AC, and, using the multiplicativity of the determinant, we find det(tE − C⁻¹AC) = det(C⁻¹(tE − A)C) = det(tE − A). Proof that the eigenvalues are the roots of P(t): if P(λ) = 0, then the mapping λ·id − f has a non-trivial kernel; let l ≠ 0 be an element from the kernel; then f(l) = λl. We now see that the operator f, in general, does not have eigenvalues, and is therefore not diagonalizable, if its characteristic polynomial P(t) does not have roots in the field K. This is entirely possible over fields that are not algebraically closed, such as R and finite fields. The algebraic closedness of K is equivalent to either of the following two conditions: every polynomial of positive degree with coefficients in K has a root in K; every polynomial factors into linear factors, P(t) = c ∏ᵢ(t − λᵢ)^{rᵢ}. In this case, the number rᵢ is called the multiplicity of the root λᵢ of the polynomial P(t).
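A sympy sketch of the characteristic polynomial and of how its roots depend on the field: the rotation matrix below has characteristic polynomial t² + 1, with no real roots and hence no eigenvectors over R (the example matrix is an illustrative choice):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[0, -1],
               [1,  0]])            # rotation of R^2 by 90 degrees

P = (t * sp.eye(2) - A).det()       # characteristic polynomial det(tE - A)
print(sp.expand(P))                 # t**2 + 1
print(sp.roots(P, t))               # {I: 1, -I: 1}: no roots in R,
                                    # hence no proper subspaces over R
```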

The set of all roots of the characteristic polynomial is called the spectrum of the operator f. If all multiplicities are equal to one, the spectrum of f is said to be simple. If the field K is algebraically closed, then according to Theorem 8 the operator f has at least one eigenvalue and hence a proper subspace. However, it may nevertheless happen that f is non-diagonalizable, because the sum of all proper subspaces may happen to be smaller than L, whereas for a diagonalizable operator it always equals L.

Before considering the general case, we shall examine complex 2 × 2 matrices. Let L be a two-dimensional complex space with a basis, and let the operator f be represented in this basis by a matrix A. We shall examine separately the following two cases: (a) the characteristic polynomial has two distinct roots λ₁ ≠ λ₂; (b) it has one root λ of multiplicity two. In case (a), let e₁ and e₂ be the eigenvectors corresponding to λ₁ and λ₂ respectively; they are linearly independent, and in the basis {e₁, e₂} the matrix of f is diagonal.

In case (b) the operator f is diagonalizable only if it multiplies all vectors of L by λ. Otherwise the simplest form of its matrix has λ on the diagonal and a one above it. This matrix is called a Jordan block of dimension 2 × 2 (or rank 2). We give the following general definition: the Jordan block J_r(λ) is the r × r matrix with λ along the main diagonal, ones directly above it, and zeros elsewhere.
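The eigenvector deficiency of the 2 × 2 Jordan block can be displayed directly in sympy:

```python
import sympy as sp

lam = sp.symbols('lambda')
J = sp.Matrix([[lam, 1],
               [0, lam]])           # the 2 x 2 Jordan block

# A single eigenvalue of algebraic multiplicity 2 whose eigenspace is
# one-dimensional: there is no basis of eigenvectors.
for value, alg_mult, vectors in J.eigenvects():
    print(value, alg_mult, len(vectors))    # lambda 2 1
```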

Aside from the geometric considerations examined above, in the next chapter we shall need algebraic information about polynomial functions of operators. Let f: L → L be a fixed operator. A polynomial Q(t) is said to annihilate f if Q(f) = 0. Non-zero polynomials that annihilate f always exist if L is finite-dimensional: the operators E, f, f², …, f^(n²) must be linearly dependent, since the space of all operators on L has dimension n². This discussion shows that there exists a non-zero polynomial which annihilates f and which has the lowest possible degree.

It is called the minimal polynomial of f. Obviously, it is uniquely defined up to a scalar factor, and every polynomial Q annihilating f is divisible by it: we decompose Q with a remainder, Q = DM + R; then R annihilates f and has degree smaller than that of M, so R = 0. For the theorem below, we select an eigenvalue λ of the operator f and a one-dimensional proper subspace L₁ ⊂ L corresponding to λ. The operator f determines an induced linear mapping f̄ on L/L₁, whose characteristic polynomial is P(t)/(t − λ), and induction on the dimension can be applied. Exercise: are these assertions true if the spectrum of f is not simple?
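Following the linear-dependence argument above, here is a sketch that finds the minimal polynomial of a numeric matrix as the first linear dependence among the powers I, A, A², …; the function name and the tolerance are our illustrative choices:

```python
import numpy as np

def minimal_polynomial_coeffs(A, tol=1e-10):
    """Coefficients c_0, ..., c_m (with c_m = 1) of the minimal polynomial,
    found as the first linear dependence among I, A, A^2, ..."""
    n = A.shape[0]
    powers = [np.eye(n).ravel()]
    for m in range(1, n + 1):
        powers.append(np.linalg.matrix_power(A, m).ravel())
        # Solve c_0 I + ... + c_{m-1} A^{m-1} = -A^m in the least-squares sense.
        B = np.column_stack(powers[:-1])
        c, *_ = np.linalg.lstsq(B, -powers[-1], rcond=None)
        if np.linalg.norm(B @ c + powers[-1]) < tol:
            return np.append(c, 1.0)       # monic: leading coefficient 1
    raise RuntimeError("no dependence found (cannot happen for n x n A)")

A = np.array([[2., 0., 0.],
              [0., 2., 0.],
              [0., 0., 3.]])
# Characteristic polynomial (t-2)^2 (t-3); minimal is (t-2)(t-3) = t^2 - 5t + 6.
print(minimal_polynomial_coeffs(A))        # approx [ 6. -5.  1.]
```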

Let f, g: L → L be linear operators in an n-dimensional space over a field of characteristic zero. Prove that the eigenvalues of g have the form a, a − 1, a − 2, ….

The Jordan Normal Form. The main goal of this section is to prove the following theorem on the existence and uniqueness of the Jordan normal form for matrices and linear operators.

Let K be an algebraically closed field, L a finite-dimensional linear space over K, and f: L → L a linear operator. Then L has a basis in which the matrix of f is composed of Jordan blocks J_r(λ) along the diagonal, and this representation is unique up to the order of the blocks. The proof of the theorem is divided into a series of intermediate steps. An operator which when raised to some power is equal to zero is said to be nilpotent. Thus the operator f − λ is nilpotent on the subspace corresponding to the block J_r(λ), and the same is true for its restriction to the sum of such subspaces for fixed λ. This motivates the following definition.

A vector l ∈ L is called a root vector of f corresponding to λ if (f − λ)^m l = 0 for some integer m ≥ 0. All eigenvectors are evidently root vectors. We denote by L(λ) the set of root vectors of the operator f in L corresponding to λ; sums and scalar multiples of root vectors are again root vectors, and therefore L(λ) is a linear subspace. Conversely, let l ∈ L(λ), l ≠ 0. We check the following series of assertions. Substituting f for t in the factorization of the characteristic polynomial, we conclude that L is the sum of the subspaces L(λᵢ); indeed, we have already verified that Lᵢ ⊂ L(λᵢ).

If the spectrum of an operator f is simple, then f is diagonalizable. We now fix one of the eigenvalues λ and prove that the restriction of f to L(λ) has a Jordan basis corresponding to this value of λ. According to the Cayley-Hamilton theorem, the operator f − λ is nilpotent on L(λ). We shall now prove the following proposition.

A nilpotent operator f on a finite-dimensional space L has a Jordan basis; the matrix of f in this basis is a combination of blocks of the form J_r(0).

If we already have a Jordan basis in the space L, it is convenient to represent it by a diagram D of dots and arrows. In this diagram, the dots denote elements of the basis and the arrows describe the action of f (in the general case, the action of f − λ).

The operator f transforms to zero the elements in the lowest row; that is, the eigenvectors of f entering into the basis occur in this row. Each column thus stands for a basis of the invariant subspace corresponding to one Jordan block, whose dimension equals the height of this column (the number of points in it). We shall prove existence by induction on the dimension of L. We denote by L̄ the quotient space L/ker f and by f̄ the operator induced on it by f. The correctness of the definition of f̄ and its linearity are obvious.

According to the induction hypothesis, f̄ has a Jordan basis. We can assume that it is non-empty. Let us construct the diagram D̄ for the elements of the Jordan basis of f̄. We shall now construct the diagram D of vectors of the space L as follows.

Lift each column of D̄ to vectors of L and extend it downward by one step by applying f; the bottom elements f^(hᵢ)(eᵢ) then lie in ker f. We select a basis of ker f containing a basis of the linear span of the vectors f^(hᵢ)(eᵢ) and adjoin the additional basis vectors to D as new columns of height one. Thus the diagram D, consisting of vectors of the space L together with the action of f on its elements, has exactly the form required for a Jordan basis.

We have only to check that the elements of D actually form a basis of L. We shall first show that the linear span of D equals L: since the images of the lifted vectors span L̄ and the bottom row spans ker f, any l ∈ L can be represented as a linear combination of the elements of D. It remains to verify that the elements of D are linearly independent. First of all, the elements in the bottom row of D are linearly independent. Finally, we shall show that if there exists a non-trivial linear combination of the vectors of D equal to zero, then it is possible to obtain from it a non-trivial linear dependence between the vectors in the bottom row of D.

Indeed, consider the topmost row of D that contains non-zero coefficients of this hypothetical linear combination.


Let the number of this row, counting from the bottom, be h. We apply to this combination the operator f^(h−1). Evidently, the part of this combination corresponding to the hth row will transform into a non-trivial linear combination of elements of the bottom row, while the remaining terms will vanish.

This completes the proof of the proposition. Now we have only to verify the uniqueness part of Theorem 9. Let an arbitrary Jordan basis of the operator f be fixed. Any diagonal element of the matrix of f in this basis is obviously one of the eigenvalues λ of this operator.

Hence the sum of the dimensions of the Jordan blocks corresponding to each λᵢ is independent of the choice of Jordan basis and, moreover, the linear spans of the corresponding subsets of the basis, the subspaces L(λᵢ), are basis-independent.

The dimensions of the Jordan blocks are the heights of the columns of the diagram; if the columns are arranged in decreasing order, these heights are uniquely determined once the lengths of the rows of the diagram are known, from the bottom row upward, in decreasing order. Indeed, we take any eigenvector l of f and represent it as a linear combination of the elements of D.

All vectors lying above the bottom row must appear in this linear combination with zero coefficients. This means that the bottom row of D forms a basis of L₀ = ker f, so that its length equals dim L₀; hence, this length is the same for all Jordan bases.

This completes the proof of uniqueness. Now let A be a matrix over an algebraically closed field. The problem of reducing A to Jordan form can be solved as follows. Calculate the characteristic polynomial of A and its roots λᵢ. For each root λ, calculate the dimensions of the corresponding Jordan blocks; for this, it is sufficient to calculate the lengths of the rows of the corresponding diagram, that is, dim ker(A − λE), dim ker(A − λE)² − dim ker(A − λE), dim ker(A − λE)³ − dim ker(A − λE)², and so on. Finally, to find a matrix X with X⁻¹AX = J, solve the linear system AX = XJ; the space of solutions of this linear system of equations is then examined. According to the existence theorem, non-singular solutions necessarily exist, and any one of them can be chosen.
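The algorithm just described can be followed in sympy: the kernel dimensions dim ker(A − λE)^k give the row lengths of the diagram, and jordan_form returns a transforming matrix together with the Jordan form (the example matrix is illustrative):

```python
import sympy as sp

A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 2]])

# Row lengths of the diagram for lambda = 2: dim ker (A - 2E)^k.
N = A - 2 * sp.eye(3)
for k in (1, 2, 3):
    print(k, len((N**k).nullspace()))
# k = 1 -> 2, k = 2 -> 3, k = 3 -> 3: rows of lengths 2 and 1,
# hence one Jordan block of size 2 and one of size 1.

P, J = A.jordan_form()                    # A = P J P^{-1}
print(J)
print(sp.simplify(P * J * P.inv() - A))   # the zero matrix
```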

Assume, for example, that we must find a large power A^N of the matrix A. Writing A = XJX⁻¹, we have A^N = XJ^N X⁻¹, and the powers of a Jordan matrix can be written out explicitly. The point is that the matrix X is calculated once and for all and does not depend on N. The same formula can be used to estimate the growth of the elements of the matrix A^N; here we shall restrict ourselves for simplicity to the case of a field of characteristic zero.
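A sketch of this use of the Jordan form for computing powers; the matrix below, whose Jordan form is the single block J₂(2), is an illustrative choice:

```python
import sympy as sp

A = sp.Matrix([[3, 1],
               [-1, 1]])        # characteristic polynomial (t - 2)^2
P, J = A.jordan_form()          # J is the single Jordan block J_2(2)

# J^N is explicit: [[2**N, N*2**(N-1)], [0, 2**N]],
# and P is computed once and for all, independently of N.
N = 10
print(P * J**N * P.inv())       # equals A**10
print(A**N)                     # the same, by direct multiplication
```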

Other normal forms. Here we shall briefly describe other normal forms of matrices, which are useful, in particular, over fields that are not algebraically closed. The space L is said to be cyclic with respect to an operator f if L contains a vector l, called a cyclic vector, such that l, f(l), …, f^(n−1)(l) form a basis of L. The matrix of f in this basis is called a cyclic block. Conversely, if the matrix of f in some basis e₁, …, eₙ is a cyclic block, then e₁ is a cyclic vector. We shall show that the form of the cyclic block corresponding to f is independent of the choice of the starting cyclic vector.

For this, we shall verify that the first column of the block consists of the coefficients of the minimal polynomial of the operator f. On the other hand, if N(t) is a non-zero polynomial of degree less than n, then N(f)(l) ≠ 0, since l, f(l), …, f^(n−1)(l) are linearly independent; hence the minimal polynomial of a cyclic operator has degree n and coincides with the characteristic polynomial.
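A numerical illustration of a cyclic basis; with the basis ordered as l, f(l), f²(l), the block coefficients appear in the last column, while listing the basis in the opposite order places them in the first column, as in the text above. The matrix and vector are illustrative:

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
l = np.array([1., 0., 0.])              # a cyclic vector for A

# The cyclic basis l, f(l), f^2(l), written as columns.
C = np.column_stack([l, A @ l, A @ A @ l])
assert np.linalg.matrix_rank(C) == 3    # it is indeed a basis

# In this basis the matrix of f is a cyclic (companion) block:
# ones below the diagonal, polynomial coefficients in the last column.
print(np.round(np.linalg.inv(C) @ A @ C, 10))
```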

We shall not prove this last assertion; the proof is analogous to the proof of the Jordan form theorem, with arbitrary irreducible factors of the characteristic polynomial playing the role of the factors t − λ. The uniqueness theorem is also valid here if we restrict our attention to the case when the minimal polynomials of all cyclic blocks are irreducible; without this restriction it is not true.

Let y(x) be a function of a complex variable x satisfying a differential equation of the form y⁽ⁿ⁾ + a_{n−1}y⁽ⁿ⁻¹⁾ + … + a₀y = 0, aᵢ ∈ C. Using the results of Exercises 1 and 2, prove that y(x) can be represented in the form Σ e^{λᵢx} Pᵢ(x), where the Pᵢ are polynomials. How are the numbers λᵢ related to the form of the differential equation? Let J_r(λ) be a Jordan block over C. Prove that the matrix obtained by introducing appropriate infinitesimal displacements of its elements will be diagonalizable. Extend the results of Exercise 4 to arbitrary matrices over C, using the facts that the coefficients of the characteristic polynomial are continuous functions of the elements of the matrix and that the condition for the polynomial not to have multiple roots is equivalent to the condition that its discriminant does not vanish.

Give a precise meaning to the following assertions and prove them: …

Normed Linear Spaces. In this section we shall study the special properties of linear spaces over the real and complex numbers that are associated with the possibility of defining in them the concept of a limit, and of constructing on this foundation the elements of analysis. These properties play a special role in the infinite-dimensional case, so the material presented here is essentially an elementary introduction to functional analysis.

The pair (E, d), where E is a set and d: E × E → R is a real-valued function, is called a metric space if the following conditions are satisfied for all x, y, z ∈ E: (a) d(x, y) ≥ 0, and d(x, y) = 0 if and only if x = y; (b) d(x, y) = d(y, x); (c) d(x, z) ≤ d(x, y) + d(y, z) (the triangle inequality). For example, the Euclidean distance makes Rⁿ a metric space; this is the so-called natural metric.

In Chapter 2 we shall study it systematically, and we shall study its extensions to arbitrary base fields in the theory of quadratic forms. Here are three of the most important metrics on coordinate space: (a) d₁(x, y) = Σᵢ |xᵢ − yᵢ|; (b) d₂(x, y) = (Σᵢ |xᵢ − yᵢ|²)^{1/2}; (c) d₃(x, y) = maxᵢ |xᵢ − yᵢ|. The triangle inequality for d₂ in example (b) and for d₃ in example (c) will be proved in Chapter 2. Another example: set d(x, y) = 1 for x ≠ y and d(x, x) = 0; this is one of the discrete metrics on E.
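Taking d₁, d₂, d₃ as above, a numpy sketch checking the triangle inequality on random points (the function definitions and test data are illustrative):

```python
import numpy as np

def d1(x, y): return np.sum(np.abs(x - y))           # sum of coordinate distances
def d2(x, y): return np.sqrt(np.sum((x - y) ** 2))   # Euclidean metric
def d3(x, y): return np.max(np.abs(x - y))           # maximum metric

rng = np.random.default_rng(5)
x, y, z = rng.standard_normal((3, 4))   # three random points of R^4

for d in (d1, d2, d3):
    # Triangle inequality: d(x, z) <= d(x, y) + d(y, z).
    print(d(x, z) <= d(x, y) + d(y, z))   # True for each metric
```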


