What's a Grassmann Number?
I must have missed the discussion of Grassmann variables back when I was taking the basic courses, because they keep popping up around fermionic integrals, yet I can't recall ever having them defined for me anywhere. With the way I like to study, that's bound to become a problem sooner or later. The concept shows up heavily in supersymmetry, perhaps as a shortcut to the purely algebraic approach to parity. This means that Grassmann numbers will primarily be concerned with being either even or odd. From particle and quantum physics in particular, we are aware of two different kinds of products.
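Concretely (notation mine, writing $|a| \in \{0,1\}$ for the parity of $a$), the two kinds of products obey

```latex
% graded commutative product: even elements commute with everything,
% two odd elements anticommute
ab \;=\; (-1)^{|a||b|}\, ba
% graded anticommutative product: the sign relation is flipped
ab \;=\; -(-1)^{|a||b|}\, ba
```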
where objects with parity 0 are considered even and objects with parity 1 odd. Graded commutative products are either commutators or anticommutators, and a graded anticommutative product flips these relations. For vectors, the parity only applies to multiplication by scalars; a matrix is even if it preserves the parity of graded vectors, and odd if it inverts it. It's usually possible to decompose matrices and vectors into even and odd parts.
A Grassmann number itself is an element of an exterior algebra generated by a set of n Grassmann variables, which can be thought of as analogous to directions, or supercharges; n needn't be finite. It's best to think of the Grassmann variables as the generators of a unital algebra. The terminology stems primarily from their historical role in defining integrals, which we'll circle back to later.
With this construction, it's easy to think of the Grassmann variables as spanning a vector space of dimension n and forming an algebra over a field (usually the complex numbers) with anticommuting generators. The generators themselves commute with elements of the field. This has the nice effect that Grassmann variables are non-zero square roots of zero: any product with a repeated factor vanishes.
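To make the relations tangible, here is a minimal, self-contained Python sketch of such a finite algebra (illustration code of my own, not a library): an element is a map from sorted generator-index tuples to coefficients, and multiplication tracks the sign picked up when reordering anticommuting generators.

```python
class Grassmann:
    """Element of a finite Grassmann algebra on anticommuting generators.

    `terms` maps a sorted tuple of generator indices to its coefficient;
    the empty tuple () holds the scalar ("body") part.
    """

    def __init__(self, terms=None):
        # drop zero coefficients so equality checks are canonical
        self.terms = {k: v for k, v in (terms or {}).items() if v != 0}

    @staticmethod
    def generator(i):
        return Grassmann({(i,): 1})

    def __add__(self, other):
        out = dict(self.terms)
        for k, v in other.terms.items():
            out[k] = out.get(k, 0) + v
        return Grassmann(out)

    def __mul__(self, other):
        out = {}
        for a, ca in self.terms.items():
            for b, cb in other.terms.items():
                if set(a) & set(b):
                    continue  # repeated generator: theta_i * theta_i = 0
                idx, sign = list(a + b), 1
                # bubble-sort the indices; each swap of neighbours
                # picks up a minus sign from anticommutation
                for _ in range(len(idx)):
                    for j in range(len(idx) - 1):
                        if idx[j] > idx[j + 1]:
                            idx[j], idx[j + 1] = idx[j + 1], idx[j]
                            sign = -sign
                key = tuple(idx)
                out[key] = out.get(key, 0) + sign * ca * cb
        return Grassmann(out)


t1, t2 = Grassmann.generator(1), Grassmann.generator(2)
assert (t1 * t1).terms == {}            # square root of zero
assert (t1 * t2 + t2 * t1).terms == {}  # generators anticommute
```

With n generators, the distinct index subsets number 2^n, matching the dimension count below.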
Formally, Grassmann numbers are defined via a Grassmann algebra, which can be constructed as the direct sum of the exterior powers of some n-dimensional complex vector space V, whose basis consists of a set of n Grassmann variables.
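As a formula (a standard construction; the notation is my own):

```latex
\Lambda_n \;=\; \bigoplus_{k=0}^{n} \Lambda^{k} V,
\qquad
V \;=\; \operatorname{span}_{\mathbb{C}}\{\theta_1,\dots,\theta_n\}
```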
A general Grassmann number is then a point in this space, constructed as a linear combination of products of the Grassmann variables from V's basis. From the anticommutation relation of the basis and the binomial theorem, the Grassmann algebra will always have dimension 2^n. The general case features an infinite-dimensional V, and so the general element (or the general Grassmann number) will have a representation of
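the following form (a sketch in the usual supernumber convention; the complex coefficients c are antisymmetric in their indices):

```latex
z \;=\; z_B + z_S
\;=\; c_0 \;+\; \sum_{k=1}^{\infty} \frac{1}{k!}\,
c_{i_1 \cdots i_k}\, \theta^{i_1} \cdots \theta^{i_k}
```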
The 0-component is referred to as the "body" and the rest as the "soul" of the supernumber z. Of course, when n is finite, the soul is nilpotent.
Seeing as we can have something like a "Grassmann matrix", we should consider the concept of a "Grassmann operator". Usually this is done using an integral operator I and a derivative operator D, as these are the ones not commonly written out as integrals. For legibility, let's define them so that DI = ID = 0; this decouples the application of the two operators from the integration variable. We should be familiar with this kind of construction from the considerations that lead to the integration by parts technique, and, to a lesser degree, from Stokes' theorem on a set with empty boundary. Though this reasoning is more empirical than deductive, let's use the considerations of Stokes' theorem to impose that
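integrals of total derivatives vanish. In the Berezin convention (my assumption for the normalization), this fixes the two defining rules:

```latex
\int d\theta \, \frac{\partial f}{\partial \theta} \;=\; 0
\quad\Longrightarrow\quad
\int d\theta \, 1 \;=\; 0,
\qquad
\int d\theta \, \theta \;=\; 1
```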
The Berezin integral is then a form of derivation. For the integration constant, it's usually convenient to take the one matching the Fourier transform convention.
To be able to use this in real applications, we'll need to define how this new algebra reacts to our usual analytical operators. A volume form on an ordinary manifold of dimension D is a form of degree D, referred to as a "top form", since it's maximal by degree and antisymmetry. In Grassmann calculus, forms are symmetric, and so there are no top forms on the superspace R^(0|m) over the real numbers. This hole in the definitions requires some supplementation, which is done through a new definition of volume elements within the volume form. In algebra (and of course topology) the go-to tools for this are ascending and descending complexes. For the space of p-forms A on M, define the exterior differentiation as
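the usual coordinate operator (a sketch, with summation over i implied):

```latex
d \colon A^{p} \to A^{p+1},
\qquad
d\omega \;=\; dx^{i} \wedge \frac{\partial \omega}{\partial x^{i}}
```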
The graded algebra A· is an ascending complex with respect to the operator d, if
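d raises the degree by one and squares to zero; in the usual convention:

```latex
d\,A^{p} \subseteq A^{p+1},
\qquad
d \circ d \;=\; 0
```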
Define the descending complex dually, exchanging A for the space of p-densities D, and the divergence operator ∇.
To circle back to the Berezin integral, since that is the method used in applications for fermionic integrals, let's remind ourselves of the method for changing (integration) variables.
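For ordinary (even) variables this is the familiar Jacobian rule; for a linear change of odd variables the determinant comes out inverted. A sketch in the Berezin convention (notation mine):

```latex
\theta_i \;=\; D_{ij}\,\xi_j
\quad\Longrightarrow\quad
\int f(\theta)\, d^{n}\theta
\;=\; (\det D)^{-1} \int f\big(\theta(\xi)\big)\, d^{n}\xi
```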
with the Jacobian matrix D. It implies that the ξ must also be antisymmetric for the derivatives to work out properly. A free superalgebra of dimension (m|n) holds functions of m even variables (bosonic, commuting) and of n odd variables (fermionic, anticommuting). Its Berezin integral can be split into an m-dimensional integral over the real numbers and an n-dimensional integral over the free superalgebra. Considering we used the Berezin integral to help get an understanding of Grassmann numbers, we should, for completeness' sake, take a look at the change of variables. Grassmann numbers can be even or odd, and we've touched on how changes between integrals of the same parity are done without alterations to the usual process. Take a set of even polynomials x and odd polynomials θ of a common variable ξ(y), where y is an even variable. The Jacobian has the usual block form, although each even derivative with respect to y commutes with all elements of the algebra Λ^(n|m). Analogously, the odd derivatives commute with even elements and anticommute with odd elements. By construction of the Jacobian, the diagonal blocks are even and the off-diagonal ones odd. The superdeterminant of the matrix J
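In block notation (A, B, C, D are my own labels for the even-even, even-odd, odd-even, and odd-odd blocks), the superdeterminant reads:

```latex
J \;=\; \begin{pmatrix} A & B \\ C & D \end{pmatrix},
\qquad
\operatorname{sdet} J \;=\; \frac{\det\!\big(A - B D^{-1} C\big)}{\det D}
```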
is defined whenever the determinants involved are invertible within the algebra. With this, the general transformation rule becomes
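the following (a sketch, assuming J is the super-Jacobian of the change (x, θ) → (y, η); the variable names are mine):

```latex
\int f(x,\theta)\, d^{m}x \, d^{n}\theta
\;=\;
\int f\big(x(y,\eta),\, \theta(y,\eta)\big)\,
\operatorname{sdet} J \;\, d^{m}y \, d^{n}\eta
```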