\( \mu(\mathrm{d}x) \) and \( \mathrm{d}\mu(x) \)

It is known that if you begin from the Lebesgue point of view you naturally end up with the notation \( \mu(\mathrm{d}x) \), where \( \mathrm{d}x \) can be thought of as an infinitesimal set; Riemann, on the other hand, leads you to \( \mathrm{d}\mu(x) \), which stands for a "change of measure". This is all, of course, bird language which doesn't define anything for real and does not help computation.
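To spell out where the two spellings come from (the standard textbook usage, nothing more specific than that): the Lebesgue-flavoured notation writes the integral as a sum of values of \( f \) weighted by measures of tiny sets, while the Riemann–Stieltjes-flavoured one writes it against increments of the distribution function \( F(x) = \mu((-\infty, x]) \) on the line:

\[
\int_E f(x)\,\mu(\mathrm{d}x)
\qquad \text{versus} \qquad
\int_a^b f(x)\,\mathrm{d}\mu(x) = \int_a^b f(x)\,\mathrm{d}F(x).
\]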

Now, I really hate all these limit-centric notations. In the smooth, bounded, non-random, finite-dimensional Euclidean world (in other words, in any naive calculus) we get rid of them most elegantly by saying that \( \mathrm{d}f \) is a differential form, that is, it maps each point \( x \) into a multilinear function -- a tangent, a multilinear approximation of the integral curve. Then, even though the action of the linear operator \( \mathrm{d}f(x) \) may involve limits in its definition -- which is perfectly fine -- our \( \mathrm{d}f \) is a pretty understandable object with a specific type. The integral curve is naturally characterized as the one with a given initial value and these tangents at each point. This captures, in a rigorous way, the original intuition of operating with "infinitesimal increments". The bird-language phrasing "for every small enough \( \Delta x \), a \( \Delta f \)" actually means that we define a linear map \( \mathrm{d}f(x) \) that turns every \( \Delta x \) into an approximation of \( f(x + \Delta x) - f(x) \).
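In symbols (just my paraphrase of the usual definition of the differential, nothing beyond standard calculus): \( \mathrm{d}f \) assigns to each point \( x \) a linear map \( \mathrm{d}f(x) \) such that

\[
f(x + \Delta x) - f(x) = \mathrm{d}f(x)[\Delta x] + o(\lVert \Delta x \rVert),
\]

and the function is recovered from an initial value and these tangents: \( f(b) - f(a) = \int_\gamma \mathrm{d}f \) along any path \( \gamma \) from \( a \) to \( b \).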

But why -- it feels like I fail to construct, even for myself, an analogous interpretation for all these \( \mu(\mathrm{d}x) \) or \( \mathrm{d}\mu(x) \).

The naive way to describe the latter is simply to say that it is a random linear operator -- the derivative of \( \mu \) for any fixed outcome \( \omega \).
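If I try to spell that out on the real line (my own guess at what the naive description amounts to; \( F_{\mu(\omega)} \) is notation I introduce here, not anything canonical): for a fixed outcome \( \omega \) the measure \( \mu(\omega, \cdot) \) has a distribution function \( F_{\mu(\omega)}(x) = \mu(\omega, (-\infty, x]) \), and

\[
\mathrm{d}\mu(x) \;\text{``=''}\; \mathrm{d}F_{\mu(\omega)}(x),
\]

so pointwise in \( \omega \) we are back to an ordinary Stieltjes differential of a monotone function.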

Yet with the set-based \( \mu(\mathrm{d}x) \)... the family of sets isn't much of a linear space, and it isn't about linearity either. Also, this notation doesn't go along with the main idea -- that we're actually splitting the image space instead of the domain.
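To recall what "splitting the image space" means here (the standard picture behind the Lebesgue integral, written out in my own symbols): instead of partitioning the domain into small intervals as Riemann does, one partitions the range of \( f \) into levels \( y_k \) and measures their preimages,

\[
\int f \,\mathrm{d}\mu \;\approx\; \sum_k y_k \,\mu\big(\{x : y_k \le f(x) < y_{k+1}\}\big),
\]

so it is the sets \( f^{-1}([y_k, y_{k+1})) \), not pieces of the domain, that get fed to \( \mu \).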
