# One more trip home

Gray sky in the window, "Atom Heart Mother" in the earplugs, an old man in a white shirt eating a sandwich wrapped in cellophane. Two sleeping kids are wearing badges; they must've attended some sort of olympiad. All sleep. Finally, silence. And "Atom Heart Mother".

11:09 The old man sleeps, his head hidden in his synthetic jacket hanging on a hook to the right of the window.

The link's too unstable for an interactive SSH session, so I'm limited to sending fire-once scripts during the short timespans when the connection shows up.

"Alan's Psychedelic Breakfast"

# Immersions, submersions, embeddings

Some tl;dr excerpts from Lee and Spivak, formalizing embeddings and related notions.

Topological embedding -- an injective continuous map $f: A\to X$ that is also a homeomorphism onto its image $f(A)$. We can think of $f(A)$ "as a homeomorphic copy of $A$ in $X$" (Lee, 2011).

A smooth map $F: M\to N$ is said to have rank $k$ at $p \in M$ if the linear map $F_*: T_p M \to T_{F(p)} N$ (the pushforward at $p$) has rank $k$. $F$ is of constant rank $k$ if it has rank $k$ at every point.

Immersion -- smooth map $F:M \to N$ whose pushforward $F_*$ is injective at every point, that is $\operatorname{rank} F = \operatorname{dim} M$.

Submersion -- smooth map $F:M \to N$ whose pushforward is surjective at every point, that is $\operatorname{rank} F = \operatorname{dim} N$.

(Smooth) Embedding (of a manifold) -- an injective immersion $F: M\to N$ that is also a topological embedding.

So, a map $F: M\to N$ is an embedding if

1. $\operatorname{rank}F = \operatorname{dim} M$,
2. $F$ is injective,
3. $F$ is a homeomorphism onto $F(M)$ with subspace topology.
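To make condition 1 tangible, here's a small numerical sketch (mine, not Lee's; the `jacobian` helper is made up): estimate the rank of the pushforward via a finite-difference Jacobian. The figure-eight curve $\beta(t) = (\sin 2t, \sin t)$ on $(-\pi, \pi)$ is the classic injective immersion that fails to be an embedding, since its image is compact while the domain is not.

```python
import numpy as np

def jacobian(F, p, h=1e-6):
    """Finite-difference Jacobian of F: R^m -> R^n at p (the pushforward in coordinates)."""
    p = np.asarray(p, dtype=float)
    cols = []
    for i in range(p.size):
        e = np.zeros_like(p)
        e[i] = h
        cols.append((F(p + e) - F(p - e)) / (2 * h))
    return np.stack(cols, axis=-1)  # shape (n, m)

# Figure-eight curve: injective on (-pi, pi) and an immersion,
# yet not an embedding -- its image is compact, the domain is not.
beta = lambda t: np.array([np.sin(2 * t[0]), np.sin(t[0])])

# rank F = dim M = 1 at every sampled point, so beta is an immersion there
ranks = [int(np.linalg.matrix_rank(jacobian(beta, [t])))
         for t in np.linspace(-3.1, 3.1, 25)]
```

The rank check only probes condition 1; injectivity and the homeomorphism-onto-image condition are global and invisible to the Jacobian.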

# Tangents in geoopt


# Cone

A cone over a topological space $X$ is the quotient $CX = \bigl(X \times [0,1]\bigr) \,/\, \bigl(X \times \{0\}\bigr)$.

A point $a$ of that cone can be identified with a point $x\in X$ together with the distance $\lvert Oa \rvert$ to the origin (the apex fiber $O = X{\times}\{0\}$, collapsed to a point).

The reason I care about cones is the notion of the tangent cone of a metric space at a point.
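A standard concrete instance (a textbook fact, not from the notes above): the cone over the circle is homeomorphic to the closed disk,

\begin{equation*} C S^1 = \bigl(S^1 \times [0,1]\bigr) \,/\, \bigl(S^1 \times \{0\}\bigr) \cong D^2, \end{equation*}

with the apex landing at the disk's centre and $\lvert Ox \rvert$ becoming the usual radial coordinate. A sanity check for tangent cones: at any point of a smooth Riemannian manifold the tangent cone is just the tangent space, so the notion only gets interesting at singular points, e.g. at the apex of a cone itself.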

# Must needs

### Must needs be

Whenever I encounter constructs like "it_1 must needs be X", I tend to decompose it into

"it must be that it_1 needs to be X",

rather than into

"it must be X" intensified by an adverb "needs",

which seems to be the consensus.

### Needs must

There's also the archaic "needs must", in which "needs" seems to be a noun and they (the "needs") actually "must":

If needs must, I'll do it.

Finally, there's another "must needs" in which "needs" acts as an intensifier; some knowledgeable people identify it as an adverb:

### Shall have to

I've also just encountered the construct "shall have to":

# Double negations

It might seem that English discourages the use of double negations, so that the following sentences, if parseable at all, are likely to be taken as a sign of a lack of education:

1. I haven't got no money.

2. I never don't do that.

The reason these sentences feel smelly is that they contain double negations which technically cancel each other, so that the sentences above might read:

1. I'm not in the state of lack of money

2. It never happens that I don't do that (e.g. never happens that I forget to do that)

But because of the low likelihood of the original constructs, one would rather assume the message contains a mistake.

The second example contrasts with the situation in French and Russian, where we use what might seem like double negatives:

1. Я (I) никогда (NEVER) не (NOT) делаю (do) этого (that)

2. Я (I) никогда (NEVER) не (NOT) курю (smoke).

3. Je (I) ne fais (not do) jamais (NEVER) ça (that)

Those aren't really double negatives; rather, the scopes of verbs and negations propagate differently, and actually omitting the "никогда" or "jamais" would lead to a contradiction in the message. For instance, the sentence

Я (I) никогда (NEVER) курю (smoke)

might be interpreted as comprised of claims:

1. I do smoke ("я курю").
2. The modality of this event, i.e. the answer to the question "how often does that happen?", is: "never" ("никогда").

The two are in conflict with each other, and while one could try to use this construct to deliver the idea of his not smoking, its likelihood is negligible.

Now it seems that in French (though I don't really understand French yet) the situation is the same, as we say:

Jamais (NEVER) plus (more) je ne (NOT) te dirai (will say)

while the sentence without "ne":

Jamais plus je te dirai

does sound contradictory, just as it would in Russian.

To emphasize the difference with English, let's note that we'd rather encode the message "Je ne fais jamais ca" as

I don't ever do that

which can be decomposed into

1. I don't do that

2. My behaviour is consistent, i.e. I always ("ever") choose the policy of "not doing that"

A friend of mine has hinted that this might come from Latin, in which both Russian and French have roots.

### Double negatives in English

It wouldn't be true, however, to say that two negating terms cannot occur in one sentence. First and most trivial, there are vernacular constructions like

"I ain't got no money",

which sound rather natural.

However, the case that got me curious is the use of "either", which I consider a "negating term". So a perfectly valid example of two negating terms going in a row in English can be seen in:

-- I'm not a linguist.

-- Me neither!

Moreover, one can notice that it takes an effort to put a non-negative term in place of "either", and the following sequence

-- I'm not a linguist.

-- Me too.

simply doesn't sound right.

# Priorities

I always knew that, given 99 right choices, I'd tend to pick the 100th, wrong one. Now's the first attempt to formalize it: all people suffer, all people work hard, but successful people choose their problems so that at each iteration the hard work is followed by a comparable reward. I, however, make choices such that my work never gives me any satisfaction and only leads to even more load. I've seen many people live their lives like that, and to my knowledge none of them were happy or successful in any way. The weird thing is that I still feel like I'm doing what's right, so I really can't complain.

# Further procrastination

• Just learned that Sussman (co-author of SICP) also co-authored a monograph on differential geometry

• And one on classical mechanics

• Moreover, the former follows the concept of Turtle Geometry and states in its Prologue the approach I've admired most since my early childhood: learning things by programming them, thus forcing oneself to be precise and exact in judgements and claims. I'm recalling right now again that first "lecture" on elementary notions of set theory the summer before admission to VSU... Constructing a function as a set so it becomes a more "tangible" object. The catharsis that followed. I didn't realize back then that it's the same as in programming. For five years I've been living with guilt and shame that I started as a coder and not a mathematician. For five years I felt programming was a disgusting and despicable thing to do. And only now do I truly realize that the thing I loved about it in those first years is the same thing I fell in love with Mathematics for that summer of 2014.

• Also stumbled upon a tweet mentioning the following interpretation of the Laplace operator: it measures how the average value of a function over a small neighbourhood of a point deviates from the value at the point itself. Sort of trivial, and resembles how we derive sufficient min/max conditions, yet I hadn't noticed it.

• The majority of these I found in the JAX cookbook

• Update! Accidentally found these slides by Absil giving some historical perspective on the subject

• For instance, the slides mention Luenberger (1973) stating that "we'd perform line search along geodesics... if 'twere feasible". Now we're closer to the roots of the whole thing.
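The averaging interpretation of the Laplacian mentioned above can be checked numerically (a sketch of mine, not from the tweet): for $f(x) = x^2$ the second difference $\bigl(f(x+h)+f(x-h)-2f(x)\bigr)/h^2$, which is exactly $2/h^2$ times the deviation of the neighbours' average from the centre value, recovers the constant $f'' = 2$.

```python
import numpy as np

# f(x) = x^2 has constant second derivative (1D Laplacian) f'' = 2
f = lambda x: x**2
h = 1e-3
x = np.linspace(-1.0, 1.0, 101)

# average of f over the "sphere" {x - h, x + h} around each point
avg_around = (f(x + h) + f(x - h)) / 2

# discrete Laplacian: 2/h^2 times (average around the point - value at the point)
laplacian = 2 * (avg_around - f(x)) / h**2
```

For a convex function the neighbours' average always sits above the centre value, which is exactly the link to sufficient min/max conditions.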

# MD is not RSGD, but RSGD also does M from MD

The whole idea of trying to parallel mirror descent with following geodesics as in RSGD has come to naught. And not in the way one would expect, because MD still seems "type-correct" and RSGD doesn't yet. Long story short: in RSGD we're pulling back COTANGENTS but updating along a TANGENT.

Update! Before updating, we raise the index of the cotangent by applying the inverse metric tensor, thus making it a tangent! Thanks to @ferrine for the idea.

Following the $\mathbb{R}^m\to\mathbb{R}^n$ analogy of the previous posts:

\begin{equation*} F:M\to N, \end{equation*}
\begin{equation*} \xi = F^*\eta\in\mathcal{T}^*M,~\text{for}~\eta\in\mathcal{T}^*N, \end{equation*}
\begin{equation*} X = \xi^\sharp = g^{-1}(\xi) = g^{-1} \xi^\top~\text{so it becomes a column}. \end{equation*}
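A minimal numerical sketch of that index raising (a toy of mine, not geoopt's actual code; all names are made up): on the Poincaré ball the metric is conformal, $g_x = \lambda_x^2 I$ with $\lambda_x = 2/(1-\lVert x\rVert^2)$, so applying $g^{-1}$ just divides the Euclidean gradient (the cotangent) by $\lambda_x^2$, yielding the tangent we actually step along.

```python
import numpy as np

def raise_index(x, egrad):
    """Cotangent -> tangent on the Poincare ball: apply the inverse metric.

    The metric is conformal, g_x = lam(x)^2 * I with lam(x) = 2 / (1 - |x|^2),
    so g^{-1} is just division by lam(x)^2.
    """
    lam = 2.0 / (1.0 - np.dot(x, x))
    return egrad / lam**2

def rsgd_step(x, egrad, lr=0.1, eps=1e-5):
    """One RSGD step: raise the index, then take a crude retraction
    (a plain Euclidean step, projected back inside the open ball)."""
    x_new = x - lr * raise_index(x, egrad)
    n = np.linalg.norm(x_new)
    if n >= 1.0:
        x_new = x_new / n * (1.0 - eps)
    return x_new

# toy objective: squared Euclidean distance to a target inside the ball
target = np.array([0.5, 0.0])
x = np.zeros(2)
for _ in range(300):
    egrad = 2 * (x - target)   # Euclidean gradient = the pulled-back cotangent
    x = rsgd_step(x, egrad)
```

A proper RSGD would retract along the manifold's exponential map instead of the crude projection here; the point of the sketch is only the cotangent-to-tangent conversion before the update.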

# WEB and robustness to disconnects

Entered the metro, and the moment my local nikola instance auto-sent an update to the browser, it erased all the rendered MathJax and printed out plain TeX. This shit decided that if it couldn't check whether MathJax had changed since a minute ago, it should simply crash. Yup, crash -- because that's what it basically is. I hate the WEB and WEB developers. Too many unjustified assumptions, both about the network and the clients. Too negligent. Too naive. Why should it exist? Why should I be bound to it? Definitely not because I'm the one who should fix it; I do not want to fix it, I do not want to have anything to do with this mess.