Better terms for mathematical concepts

Mathematics, like most other human fields of study, uses a jumble of terminology accumulated over time. I personally found it to be a stumbling block.

It has gotten so bad in recent years that a term like ‘Gram-Schmidt orthonormalization’ sounds to me like ‘Lack-of-Imagination orthonormalization’. ‘Riemann integral’ becomes ‘Lack-of-Imagination integral’. ‘Stone-Čech compactification’ becomes ‘Lack-of-Imagination compactification’. And so on.

Maybe that’s just me.

Nonetheless, terrible naming in mathematics is a disease that impedes progress. Improvements could be made.

I imagine that good mathematicians are capable of more imaginative thinking. And while it is a fair point that the concept is more important than nomenclature or symbology, that does not imply that the name is of no use, or of no consequence.

The name or symbol used for a mathematical concept is not of mathematical importance. Indeed, it is good practice in mathematical writing to explain the terms and symbols being used, as opposed to assuming that the reader is familiar with your use of them. But often in a field of study, a term becomes fixed in its usage, or becomes so commonplace that it is usually not explained.

Any outsider, even another mathematician, picking up the literature is faced not only with the task of understanding new concepts and facts, but of memorizing a new set of terms.

Good names for things can be very helpful. A good name can bring the thing to mind.

My concern is primarily with relatively modern terms that are not suggestive of their concept. The most common unsuggestive terms employ some person’s name. Another kind is not just unsuggestive of the concept, but has a completely different interpretation, or even carries derogatory connotations. Another particularly poor choice is the name of a letter or symbol that has been used to denote the object—that is brain-dead unimaginative, and can have the effect of removing the symbol from general use.

It is objectionable to name any important mathematical result after a person, even if they were in fact its originator. The concept is more important than the person.

The facile explanation for attaching a person’s name to a mathematical concept is that the name is that of the “inventor” or “discoverer” or “first publisher” of the concept, or the “first prover” of a theorem. Typically, this explanation does not bear the slightest scrutiny: typically, the person whose name was hijacked bore none of those relationships to the concept. Typically, such names are a historical accident.

Another offhand explanation is that we do homage to these great people by slapping their names onto things we deem important. That is silly and pathetic, besides being wrong. Most of the name-slappers have never bothered to find out anything at all about the slappees. It is false that they mean to do homage to the great one. It is false that they have done them any honor.

So what, oh what, can the real reason be?

If you find delight in attaching a person’s name to a mathematical object, or in knowing which object is associated with a person’s name, I challenge you to consider honestly just where the pleasure comes from... because it is certainly not for the reasons we usually hear.

A perversion has arisen, whereby a purported originator’s name is attached to every mathematical object — as though the thing isn’t properly named until a person’s name is attached. You will see this in Wikipedia, especially: when there is a choice between a descriptive name and a name of a person, the latter is almost always taken as the official Wikipedia page.

Some purists in this perversion go so far as to concatenate the names of everybody accused of originating a concept into monstrous hybrids such as “Abel-Jacobi-Liouville Identity”, “Gauss-Ostrogradsky theorem”, and so on. As though not to anger the gods, or the ghosts, by omission.

But I say, the aim of this stupidity is to make what is already difficult even more arcane, to make boobs feel smart, because they know a weird foreign-sounding name for something.

It is boobery and sabotage, besides being transparent name-dropping; it is putting the babble into technobabble.

In the writings of the scientific explosion that followed the Renaissance, authors often referred to ideas by the name of the author. But in those days, they were busy sorting the ideas out, and they were reading and referring to the original works of other authors, and they were often personally acquainted with the original authors. Writing to his friends Smith and Jones, Mr. Williams would speak of “Brown’s method”… to distinguish it from Smith’s method, or to indicate the thing they had all read in Brown’s recent communication. They were not in the business of naming things at the time.

Mathematicians’ names were attached to these mathematical things by later pedants and brown-nosers.

Fawning faux-honor is only one of several bad choices employed by the imagination-challenged. For example, observe the many mathematical terms smeared with praise: ‘special’, ‘generalized’, ‘fundamental’, ‘principal’, ‘golden’, ‘perfect’. The geometric meaning of the term ‘normal’ is lost in most of its modern applications, where it echoes the social use: ‘not objectionable’. Almost as often as the vapid slather comfortable things in praise, they dismiss uncomfortable reality with petty insults: ‘imaginary’, ‘complex’, ‘improper’, ‘defective’. My personal favorite is ‘pseudo-’. Of course, there is no end to the ways people display thickness. On Earth, a vacuum is quickly filled by dust and trash.

The most ubiquitous letter used for a mathematical concept is of course π. Many people know the letter only as the geometrical ratio. (Well, it isn’t completely hijacked: mathematicians regularly use the letter for other purposes.) But what are you gonna do? Just avoid such things in the future.

Regarding ancient mathematical terms: Besides having stood the test of some serious time, besides pervading the literature for centuries in many languages, they are typically quite descriptive, when their sense is understood in their original language. In the old days, they knew how to name things.

Often, quite different terms are used in different languages for the same mathematical concept. It behooves us to look around, and see if somebody had a better idea.

Better names ought to suggest a concept.

A good name does not invoke some identity with some True Thing, some ideal. So ‘wave equation’ would denote an equation representing wave-like phenomena, but not the True Essence that is truly a True Wave. The goal here is a rational nomenclature, not some half-baked mystical identity of a term with an ideal.

A good, suggestive name needn’t constitute a description. It might contain a partial description, though. See especially “Platonic solids” below.

An objection I have heard many times to this activity is that “it has a name, it’s over”. “Re-naming it would be confusing.” If you think mathematical terms can’t change over time, you haven’t been around the block. It is true, re-naming a thing does cause some trouble for a while. But many of the terms I mention here are called different things in the literature of different fields, and some of them go by several names in parallel within one field.

The following are just suggestions. Please write to me with your own. Or explain politely and logically any weaknesses you perceive in mine.

This is a work in progress. I have a list several times as long of similar poorly-named terms. Time, as ever, is limited.

And I really don’t expect that this will change the literature… that does not stop me trying. Hope springs eternal.

complicated technical theorems

Every field has technical theorems whose statement is complicated, which perhaps refers to several things and has several clauses connected with complex logic.

Is there a “good” name to assign a theorem, one which somehow suggests its content, but which is also succinct? A name that is shorter than a line of text?

My first reaction to this issue is that complicated theorems are themselves another kind of bad thing. A theorem should be a strong, clear statement on a very restricted topic. If you can’t say it in a couple of simple sentences, maybe it decomposes into multiple useful statements.

As to naming things descriptively, a typical technical theorem involves set-up whose purpose is to encapsulate the context of the theorem into its statement. Consider the case of the group theory theorem that goes by Lagrange’s name. It is about finite groups, so within that context, there is no need to explicitly repeat the phrase “finite group” in a descriptive name of the theorem. Outside the context of finite group theory, the words “of finite groups”, appended to the name of the theorem, specify the context.


Terms that don’t need fixing

The majority of terms used in mathematics are pretty good. Some show the strokes of brilliance that you might expect from a practitioner of such a brainy science.

ancient terms

I don’t think it’s helpful to alter ancient terms, even if they derive from languages that few people know. Most of these absolutely pervade all literature, and have stood the test of substantial time. Besides, I don’t think it hurts to learn a few words of a dead language.

point, line, area, volume, parallel, perpendicular,
(names for basic geometrical shapes and operations on them)
axiom, theorem, lemma etc.
sinus, cosinus, tangent

as well as relatively recent words such as

algebra

nationalism

It seems obvious to me that it is best to first try the present language for a term for a mathematical concept.

This has not always been the consensus. Until the 19th century, most Western writers were writing in Latin, and had some familiarity with ancient Greek as well. It was natural for them to make use of those common sources.

In English, some fields of math first picked up terms from German and French, as the seminal writings of those fields were in those languages.

These days are over. Nowadays, international writing is generally done in English.

Nationalist movements regarding language have often proven destructive, often producing very bad terms for new things. The goal is not to produce just any old term in our native language. A really long-winded or misleading term in the national language is clearly worse than a simple, clear foreign word. And learning words in a foreign language is generally a good thing, which for smart people, is worth encouraging.

As in all things, one should examine all options, and make a balanced decision.

already good terms

number line, topological compactness, measure, (algebraic) kernel

calculus: this term of Newton’s is just beautiful. A calculus is a pebble. An ancient means of reckoning employed pebbles, and Newton remarked that he had picked up a pretty pebble.

calculus of variations, Lagrange’s own term, succinctly expresses the generalization of the calculus to the situation where a function is undergoing variations. This field is called by other names, depending on what is being emphasized: none of which is very bad, for instance: distribution theory

torsion group: Every finite group is a torsion group, because it twists back on itself; for example, modular arithmetic.

cases when an author’s name suffices

I see nothing wrong with an author’s name being attached to an idea which proves to be false or of little further use, or whose truth or usefulness remains debatable — provided they are indeed the originator of the idea. Conjectures fall into this category.

Another category of objects that ought to retain the original inventor’s name are complicated constructions which defy a simple description. For example, the Peano curve (or Peano’s space-filling curve) is a very nice thing, decidedly the invention of Peano himself, for which a suggestive name would be unwieldy at best. (I imagine.)

A particularly clever proof might also retain its inventor’s name. A proof is a construction. (Unless there is some notion of a simplest possible proof…)

Likewise, a complicated heuristic that defies simple description, and which was introduced by a certain person, might well be known by that person’s name.

Zeno’s paradox is a little puzzle that was beyond the tools of reasoning of his time, but which has since been dispensed with.

The word algorithm derives from the name of the Persian polymath al-Khwarizmi (الخوارزميّ). I don’t know a simple way to describe the concept, and the word is pretty entrenched. The Latinized form of the word has even been back-ported into Farsi: الگوریتم. Could it have been done better, though? German has a Germanic term invented for this, Rechenvorschrift, which isn’t in itself bad (but only used in official documents — Algorithmus is the common term). It comes into Latin-English as “reckoning prescription” or “calculation specification”; both are clumsy for an important, often-used term.

The interest in Fermat primes is that the main conjecture about them has eluded proof. But that conjecture essentially states that the Fermat primes are uninteresting! (On the other hand, if the main conjecture is proved false, this sequence might prove very interesting in itself. So we should invent a name to reflect why they are interesting!)

What about Fermat’s last theorem? It’s a negative statement that amounts to “don’t bother looking here” — but proving it consumed the lives of many people. It had a big effect, and resulted in a great deal of math that may well be useful, but the statement itself is a dead end (as best I can tell). I’m happy with leaving his name on it.

There are many examples of conjectures about non-sharp bounds, which I wouldn’t regard as essential mathematics. They will presumably become items in a historical record. I just don’t care if they retain the original publisher’s name — indeed, in a largely historical role, the publisher’s name is useful.

about honor and homage

If you really want to give credit to a researcher in your publication, please insert a footnote “First studied by … in” followed by a reference. Then you will do the researcher the honor of inviting your readers to look at their work.

If you’re writing a text book, please take the time to write a short history of the subject. A page or so at the end of each chapter, or footnotes, are often used for this purpose. This is the right way to instill appreciation for the hard work and talent of our forebears.

Promulgating a misconception that a certain guy invented or owns something that they did not invent, and that nobody owns, does homage to nobody. It is a shame.

geometry

Pythagorean theorem

This fact was old before Pythagoras or the Pythagoreans got their hands on it.

The basic fact was recorded by the Babylonians as early as 1800 BCE. The statement is present in ancient Indian writings (the Baudhayana Shulba Sutra, thought to be from 800-500 BCE), along with a geometrical sort of proof. And perhaps later, but probably independently, the statement and reasoning for it appeared in the Chinese book, 九章算術 (The Nine Chapters on the Mathematical Art), which was compiled before 170 CE. It is therefore unsurprising that, in India and China, the theorem goes by names other than that of Pythagoras.

The theorem is the primary relation of lengths to areas. So length-area theorem is an option. It is also the primary theorem about right triangles. So something like right area theorem would do. But I can think of nothing better than

hypotenuse theorem

The scarecrow would put his finger to his head and nod enthusiastically!
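For the record, the relation itself, with legs a and b and hypotenuse c of a right triangle; it equates the areas of the squares on the sides:

```latex
a^2 + b^2 = c^2
```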

Euclidean plane

— emphasizes rather different things in different fields. In basic geometry, it refers to Euclid’s axioms for the plane. In some fields, a norm is emphasized.

Why not just ‘plane’? In English, it has two problems: first, it is itself very general and has other meanings (as noun, adjective, and verb); and second, it sounds like another word, spelled ‘plain’, which has different meanings of its own. It’s too messed up.

One hears ‘plane surface’, but that doesn’t suggest the unbounded nature. ‘normed 2-space’ in linear algebra should suffice to identify it, but it’s messy and jargony.

geometric plane

is a good term for the geometric concept. It rules out non-flat geometries and higher-dimensional objects as well as anything else does.

flat manifold

serves to specify the same object, undecorated with geometric notions.

Euclidean/non-Euclidean geometry

flat/non-flat geometry

Riemannian geometry / manifold

non-flat geometry / manifold

Platonic solid

These were, of course, not Plato’s invention; nor did he do mathematical research on them. They had been known to Greek geometers for centuries by his time, and they were not known in ancient times by his name. He toyed with a relation of these shapes with the “classical elements”, and so medieval philosophers associated them with him.

That some guy made a mystical mess of lovely mathematical objects is surely among the worst reasons for the objects to bear his name.

On the other hand, what is it that makes these exactly 5 shapes special? What would be a succinct description of them?

Making precise the concept that leads to these five shapes is the subject of whole books. Imre Lakatos, in his Proofs and Refutations, portrays various attempts to characterize the Platonic solids as ‘regular’, or as ‘good’, or as ‘non-monstrous’. The point here, however, is not to provide a useful mathematical characterization, but rather a suggestive name, a label.

I would prefer something more compact. For this,

regular convex polyhedron

leaves little to the imagination.

spiral of Archimedes

The radius increases at a rate proportional to the angle.

proportional spiral
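The defining relation, in polar coordinates, with a constant of proportionality a:

```latex
r = a\,\theta
```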

Möbius strip / band / loop

There are images of twisted bands from ancient times. The shape is easy to make, even by accident. Of course, people in ancient times knew that it only had one side.

Möbius did publish a mathematical memoir about it, saying he had investigated it in 1840, but had been preceded in publishing and investigation by Listing.
See: Biography of Möbius at U. St. Andrews School of Mathematics and Statistics.

(Britannica tells the story the other way around. See: Mobius strip.)

A descriptive name is easy:

band of one twist

Cayley transform

Cayley in fact studied a specific generalization of a simple formula, a ratio of two linear terms, to sets of matrices. But this generalization is part of a bigger thing, which appears in several areas of math. (See: Möbius transformation.)

There is also a history of calling it a ‘bilinear transformation’ (due to the formula having two linear function terms.) This is objectionable because it fails to mention the fraction that is involved, and because the term ‘bilinear’ is in common use to mean something rather different.

It is a special case of a transform which has a simple descriptive name:

linear fractional transform
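The general form in one complex variable, with constants a, b, c, d:

```latex
w = \frac{a z + b}{c z + d}, \qquad ad - bc \neq 0
```

Cayley’s own transform, w = (z - i)/(z + i), which maps the upper half-plane onto the unit disk, is the special case a = 1, b = -i, c = 1, d = i.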

Desargues theorem

A statement of the theorem is: If two triangles are perspective from a point, they are perspective from a line, and conversely.

I don’t know any other important theorem confusable with:

point-line perspective theorem

analytic geometry

Cartesian plane

While Descartes does seem to have been the first to have published the idea, it had been used for some time previously by mathematicians, notably, Fermat.

The idea is to break the geometric plane down, by assigning numerical coordinates to each point. That is, to analyze the plane. There is a good descriptive name.

analytic plane

Cartesian coordinate system

In two dimensions, another term is better:

rectangular coordinate system

In any geometry supporting a notion of orthogonality, another good term is

orthogonal coordinate system

Cartesian axes

— as opposed to... what sort of axes? We don’t talk about axes with polar coordinates. We commonly say

x- and y-axes

but that presupposes names for the axes. What would be better? Doesn’t the term

coordinate axes

cover all the cases?

Julian proposes, at least for geometries with a property of orthogonality, saying

orthogonal axes

folium of Descartes

I read that the connection with Descartes is that he proposed to Fermat the problem of finding its tangent lines (which Fermat solved easily). The other word is just Latin for ‘leaf’.

It does have a little loop which suggests a leaf. But it is also infinite in extent, and asymptotic to a line, which is decidedly un-leaflike. Besides that, it is algebraically cubic.

There are so many combinations of those properties… and so many things nearby… Maybe

cubic asymptotic leaf-curve

Descartes’ rule of signs

The statement of the rule was first published by Descartes. But it is very easy to prove.

Within a context of polynomials on the number line, there are some other more complicated theorems regarding signs… but none is a simple ‘rule’. So it should suffice to refer to this as the

rule of signs

Outside the context of polynomials on the number line, the context can be specified, as “rule of signs of roots of polynomials on the number line.”
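The rule is easy to state executably; a minimal Python sketch (the helper name sign_changes is just for this illustration):

```python
# The rule of signs: the number of positive real roots of a real polynomial
# (counted with multiplicity) is at most the number of sign changes in its
# coefficient sequence, and differs from that count by an even number.

def sign_changes(coeffs):
    """Count sign changes in a coefficient sequence, ignoring zero entries."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3): three sign changes,
# and indeed three positive roots.
changes = sign_changes([1, -6, 11, -6])
```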

affine function / geometry

— not a terrible term in itself, and its provenance is superb. But unfortunately, as much as the term’s originator argued for it, the connection to the concept is vague; and more unfortunately, modern knowledge of Latin has declined to the point where very few would recognize the meaning.

But the worst issue is that it is terribly confused with the term ‘linear’. In analytic geometry, a linear function is one whose graph is a line, irrespective of whether that line passes through the origin. But in linear algebra, a linear transformation must map the origin to itself.

The term ‘affine’ serves to fill the gap in linear algebra, by providing a term for transformations consisting of a linear transformation followed by a translation. But this is a hard lesson for beginners, who had first learned the term ‘linear’ to mean something else.

Euler introduced the term in his 1748 book Introductio in analysin infinitorum (Bd. II, Kap. XVIII, Art. 442), where he explicitly defines the term and defends its use. The meaning of the term is apparent in its relation, ‘affinity’ (from Latin affinis, ‘related’) — the idea being that the points of a geometrical figure translated in space are thus related to the original points.

As to ‘affine geometry’, I read that the term was coined by Möbius in Der barycentrische Calcul, (Leipzig 1827, S. X und 195.). (But I failed to find it there. He writes affine Curve, affine Figur, affines System.) Felix Klein’s Erlangen program recognized affine geometry as a generalization of Euclidean geometry.

I don’t know where to find a better term. But it is important to warn students of the confusion, and to enlighten them as to the origins.

kinds of numbers

complex number

— a terrible name for a pair… like saying a couple is a “complex person”. It doesn’t say how complex — and there are other numbers that are much more complex than these. Even worse, ‘complex’ has unintended negative, off-putting connotations.

One could say ‘pair number’, but that doesn’t suggest the geometric aspect, and moreover, is subject to confusion with ‘pear number’ or ‘pare number’. I wish ‘plane number’ were an option, but unfortunately in English it could be confused with ‘plain number’ which would cause even worse problems, or even misunderstood as ‘airplane number’. But there’s an easy option:

planar number

real number

— suggests other numbers aren’t ‘real’, which is ironic, because philosophical arguments have been raised against the completion of the number line itself, and against any completed infinity whatever. The “real” numbers require some imagination, too. A better name is

line number

which clearly indicates a member of the number line.

imaginary number

— one of the worst-named concepts in all of math. (Granted, it took some imagination to come up with the numbers, but the name is a rejection of imagination.) But what would you call them?

The term was introduced by Descartes, but it is clear that he was at a loss for a better word when he did so. He was explicitly denying that any quantity could behave this way. (Of course, the problem is in thinking that these things should be quantities.)

Here’s what Gauss had to say about it:

“If one formerly contemplated this subject from a false point of view and therefore found a mysterious darkness, this is in large part attributable to clumsy terminology. Had one not called +1, −1, √−1 positive, negative, or imaginary (or even impossible) units, but instead, say, direct, inverse, or lateral units, then there could scarcely have been talk of such darkness.” — Gauss (1831)
Theory of biquadratic residues. (Second memoir.)
Commentationes Societatis Regiae Scientiarum Gottingensis Recentiores. 7: 89–148.

So Gauss suggests

lateral number,

which isn’t bad.

One could say vertical number… but it suggests an inaccurate relation to direction. Then line numbers in the plane would be ‘horizontal’. Other possibilities, such as right number, alternate number, or spouse number, are each rather weak.

It was once common to speak of the ordinate and abscissa of coordinates, usually with y the ordinate and x the abscissa.

ordinate number!

In which case, the x axis would be the locus of the

abscissal numbers!

or we could continue calling them line numbers.

It’s Latin, but at least it’s not humbug. And either ‘lateral’ or ‘ordinate’ saves a syllable over ‘imaginary’.

Of course, there are traditions of exchanging the x and y axes of the complex plane. So ‘ordinate’ could go either way.

The multiplication operation of complex numbers is more essential, however, than their representation in the analytical plane. Multiplication by i is a rotation. This suggests:

rotational numbers

The fact is, there are multiple aspects to planar (i.e. complex) numbers. I guess no one terminology adequately captures everything.

When speaking of the analytical plane, my preference is to call x ‘line numbers’ and y ‘ordinate numbers’. In reference to mappings, ‘rotational numbers’ is preferable.
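The rotational reading is easy to check with ordinary arithmetic; a quick sketch using Python’s built-in complex type:

```python
# Multiplying a planar (complex) number by i rotates it a quarter turn
# counterclockwise about the origin, leaving its length unchanged.

z = 3 + 4j
quarter_turn = 1j * z          # (3, 4) rotates to (-4, 3)
half_turn = 1j * quarter_turn  # two quarter turns negate z
```

The same one-line multiplication rotates any z, which is what makes ‘rotational numbers’ suggestive.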

Gaussian integer

It may well be true that Gauss invented these. Nevertheless, a much more descriptive term would be:

planar integer

Cayley-Dickson construction

— of quaternions, octonions, and other hypercomplex numbers.

Cayley and Dickson were in fact involved in the early development of these algebras, and of this particular construction. More descriptive names for it include:

doubling procedure

and

conjugate construction.
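The doubling procedure is short enough to sketch in code. Sign conventions vary between authors; this sketch (all names are just for illustration) uses the pairing rule (a, b)(c, d) = (ac − d̄b, da + bc̄), with pairs nested as Python tuples:

```python
# A sketch of the doubling procedure: numbers at each level are pairs (a, b)
# of numbers from the level below; the base level is the plain reals (floats).

def conj(x):
    """Conjugation: negate the second half of a pair, recursively."""
    if isinstance(x, tuple):
        a, b = x
        return (conj(a), neg(b))
    return x

def neg(x):
    if isinstance(x, tuple):
        return (neg(x[0]), neg(x[1]))
    return -x

def add(x, y):
    if isinstance(x, tuple):
        return (add(x[0], y[0]), add(x[1], y[1]))
    return x + y

def mul(x, y):
    """One common convention: (a, b)(c, d) = (ac - conj(d)b, da + b conj(c))."""
    if isinstance(x, tuple):
        a, b = x
        c, d = y
        return (add(mul(a, c), neg(mul(conj(d), b))),
                add(mul(d, a), mul(b, conj(c))))
    return x * y

# Level 1 (pairs of reals) behaves like the complex numbers: i*i = -1.
i_complex = (0.0, 1.0)

# Level 2 (pairs of pairs) behaves like the quaternions: i*j = k.
i_q = ((0.0, 1.0), (0.0, 0.0))
j_q = ((0.0, 0.0), (1.0, 0.0))
k_q = ((0.0, 0.0), (0.0, 1.0))
```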

number theory

Mersenne number / prime

2-power-less-1 number / prime

Diophantine equation

integer algebraic equation

Pythagorean triple

These were known at least a millennium before Pythagoras.

triple of powers of integers

numbers

Fibonacci numbers / sequence

— were known in India hundreds of years before Fibonacci, although he did advance knowledge about them.

If ‘golden ratio’ is an acceptable term, then ‘golden sequence’ would work here. But what does gold have to do with it?

Fibonacci himself introduced the sequence in terms of an idealization of the reproduction of rabbits, as it is often still introduced today. So:

rabbit numbers / sequence
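The idealization is easy to put in code; a small sketch (the function name is just for illustration): juvenile pairs mature in one month, and each adult pair produces one new juvenile pair per month.

```python
# The rabbit-pair idealization behind the sequence.

def rabbit_numbers(n):
    """Return the first n terms 1, 1, 2, 3, 5, ... of the rabbit sequence."""
    adults, juveniles = 0, 1   # start with a single juvenile pair
    terms = []
    for _ in range(n):
        terms.append(adults + juveniles)          # total pairs this month
        adults, juveniles = adults + juveniles, adults
    return terms
```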

golden ratio / spiral / triangle

Golden in this context just means really nice. That gives no clue as to what the ratio is.

It was a later misnaming. Euclid called the ratio

extreme and mean ratio.

self-similar ratio / spiral
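Euclid’s ‘extreme and mean ratio’ cuts a length a + b so that the whole is to the larger part a as a is to the smaller part b; the common value is the self-similar ratio:

```latex
\frac{a+b}{a} = \frac{a}{b} = \varphi = \frac{1+\sqrt{5}}{2}
```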

Pascal’s triangle

— was studied by mathematicians centuries before Pascal. For this reason, in Farsi it is called Khayyam’s triangle, and in Chinese, Yang Hui’s triangle. Rather explicitly:

triangle of sums of predecessors

or maybe more briefly and poetically

parent-sum triangle
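The proposed name describes the construction exactly; a small sketch in Python (parent_sum_triangle is just an illustrative name):

```python
# Each entry is the sum of its "parents": the one or two entries
# directly above it in the previous row.

def parent_sum_triangle(n_rows):
    rows = [[1]]
    for _ in range(n_rows - 1):
        prev = rows[-1]
        # pad with zeros so each entry is the sum of the two above it
        rows.append([a + b for a, b in zip([0] + prev, prev + [0])])
    return rows
```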

calculus

derivative (of a function)

It’s long and Latin and not very suggestive. But I don’t have a better word.

The term was used by Lagrange, in the sense of something derived from, or re-channelled (the “rive” in “derive” is related to “river”). That’s not much to go on, as a suggestive term.

The term “linear approximant” is more descriptive, but long and jargony, and no less Latin.

differentiate, differentiable

This word is Leibniz’ invention — all five Latin syllables.

It suggests the taking of differences, which is good.

I have no better suggestion, though.

fundamental theorem of calculus

I concede the essential role of the theorem, but the name is naughtily nondescriptive. The term ‘fundamental’ is emphatic in nature, not descriptive.

What the theorem establishes is that one operation is a kind of inverse of the other. So:

inversion theorem of calculus

You might stick ‘fundamental’ or ‘primary’ or ‘basic’ in front of that, if there is risk of some other theorem about inversion.
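For the record, the content of the inversion, stated for suitably well-behaved functions: each operation undoes the other.

```latex
\frac{d}{dx}\int_a^x f(t)\,dt = f(x),
\qquad
\int_a^b F'(x)\,dx = F(b) - F(a)
```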

Leibniz’ rule

The good old product rule had appeared before Leibniz published his demonstration of it. But in any case, it is better to call this essential property of differentiation by its familiar name:

product rule

Another rule, sometimes called the “generalized Leibniz’ rule”, is just the rule for powers of a product. So directly:

product powers rule
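In symbols, for suitably differentiable f and g, the product rule and the rule for higher derivatives of a product:

```latex
(fg)' = f'g + fg',
\qquad
(fg)^{(n)} = \sum_{k=0}^{n} \binom{n}{k} f^{(k)} g^{(n-k)}
```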

improper integral

— another example of a pejorative employed to cover for inadequate imagination. As though such integrals were somehow dirty or incorrect or just not done.

This is especially silly nowadays, when the notion of infinity has long since lost its blasphemous aspect.

There are quite positive, descriptive terms in common use:

integral over a half-line; integral over the whole line

Rolle’s theorem/lemma

— the statement that a ‘real’-valued function defined on an interval, taking the same value at each end, has a derivative that is zero at some point in the interval. It’s an easy corollary of the mean value theorem, and just one step away from the extreme value theorem.

Does it really need a name?

Rolle proved it only for polynomial functions. Cauchy was the first to prove it in modern generality.

It is a special case where the function takes the same value at both ends. What would one call such a function? The multivariate version of the statement has the function constant on the boundary of a region.

Newton’s method / Newton-Raphson method

— did not spring out of thin air, and a long sequence of developments preceded the form by which we know it today.

tangent method

This makes a nice pair with ‘secant method’.
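A minimal sketch of the iteration (the name tangent_method and the stopping rule are just for illustration): at the current guess, follow the tangent line of f down to its x-intercept, and repeat.

```python
# The tangent method: replace x by the x-intercept of the tangent line,
# x - f(x)/f'(x), until the step becomes negligible.

def tangent_method(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Find the square root of 2 as a root of f(x) = x^2 - 2.
root = tangent_method(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```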

Taylor series / MacLaurin series

Both imply a function; the first implies a point of expansion. Often written out “Taylor series expansion of a function about a point.”

power series

is more general. But in this context, “power series expansion of a function about a point” is synonymous with Taylor series. And in common use, “power series expansion” subsumes both Taylor series and MacLaurin series expansions.

The action “expand as a Taylor series” is synonymous with “expand as a power series”.

The only time that ‘Taylor series’ is more economical is in talking about these things in the abstract. There, ‘power series expansion’ is necessary, to distinguish from the generality of power series. But in any application, “power series for” is just as descriptive as “Taylor series for”.
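For reference, the expansion in question, for a function f about a point a (where it converges):

```latex
f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x - a)^n
```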

Note: Taylor seems to have been the first to publish a derivation. Both Newton and Gregory knew of the power series expansion previous to Taylor’s publication. MacLaurin used a special case of it and attributed it to Taylor.

Laurent series

— just a generalized power series, where the powers can take negative integer values. The term “generalized power series” is better. Or even “integer power series”.

But it seems to me that it is not the fault of these series that the more popular series are limited. How about calling the other ones “nonnegative power series”?
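For reference, the generalized power series about a point, with the index running over all integers:

```latex
f(z) = \sum_{n=-\infty}^{\infty} a_n (z - z_0)^n
```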

Stieltjes integral (or Riemann-Stieltjes integral)

— is such a natural, easy thing, one that opens up so many possibilities, that I cannot fathom why it isn’t taught in first-year calculus courses. Surely, many of the more inventive students accidentally invent it themselves.

It binds integration by parts with change of variables, turns the hoodoo of ‘delta functions’ into a triviality, and opens a door to important topics in distribution theory and functional analysis.

Yet I was not formally introduced to it until the second year of graduate studies — and as an optional chapter at the end of a book, at that. What’s worse, looking back on it, I see that many of the books I studied could have been greatly simplified by adopting this simple idea. It is clear to me the authors were simply ignorant of it.

The Stieltjes integral is simply the formal definition of integration of one function with respect to another. It is often simply called the

integral with respect to a function

The naked dx integral that we learn first is then the “integral with respect to the identity function”.
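A crude numeric sketch of the idea (a left-endpoint sum rather than a proper limit; the helper name is mine):

```python
# Approximate the Stieltjes integral of f with respect to g on [a, b] by
# summing f(t) times increments of g.  When g is the identity function,
# this is the ordinary Riemann sum.

def stieltjes(f, g, a, b, n=100000):
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + i * h
        total += f(x) * (g(x + h) - g(x))    # f at the left endpoint
    return total

# Integral of x with respect to x^2 on [0, 1]: the same as the integral
# of x * 2x dx, i.e. 2/3.
val = stieltjes(lambda x: x, lambda x: x * x, 0.0, 1.0)
```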

(Calculus text authors: get with the program, or get out of the business!)

Euler’s number

— is used only in discussion of logarithms and exponentials. We have ‘natural logarithms’, of which it is the base. It is “natural” because it is the base for which the derivative of the exponential gives the same exponential.

So the natural term for the number is the name by which it is often called:

natural base of the logarithms/exponentials.

Euler (or Euler-Lagrange) operator

The term ‘Euler operator’ is used by different people to mean quite different things, whenever they need to make a simple thing look fancy. One common use is for simple terms such as d/dx, which are commonly called

differential operator

vector calculus

Gauss’ (or Ostrogradsky’s) theorem/formula

— it seems, was first discovered by Lagrange... Come on. WHY? The obvious name for this is:

divergence theorem/formula

Stokes’ theorem/formula, Kelvin–Stokes theorem

Stokes was a great guy, but he in no way invented this thing or first published it or greatly expanded on it. The story of the attachment of his name to it is: as a professor, he liked to put its proof as an exam question, and some of his students (who later became famous) referred to it as “old Professor Stokes’ theorem”.

This is an especially stupid excuse for the name of an important mathematical statement.

These days, it refers to two related things: originally, to the rather easy extension of “Green’s Theorem” to curved surfaces, and later, with the qualification “generalized”, to the analogous theorem in any number of dimensions.

Besides the abstraction, the function in the theorem is used to neatly represent physical phenomena which might provide a name:

general circulation theorem/formula

or even

general curl theorem/formula

In Italian, it’s teorema del rotore (the curl theorem).

Green’s theorem (sometimes Green-Riemann theorem)

Green is justly blamed for originating multidimensional integral theorems. However, what he derived was a two-dimensional divergence theorem: the statement that is captioned “Green’s theorem” in modern textbooks was first published by Cauchy, much later.

But again, the function in the theorem neatly represents a geometrical/dynamic notion:

vortex theorem

An obvious name for this theorem would be

circulation theorem

since after all, one side of the equation deals with a quantity called ‘circulation’. However, a special case applied to ideal fluids is sometimes called “Kelvin’s circulation theorem”. That statement was published many years after Cauchy had proved the more general case.

(Side note: this theorem is often described as a “special case of Stokes’ theorem”. I would rather describe the latter as a mapping of this theorem onto a smooth manifold in 3 space.)

Green’s function

This is with regard to a differential operator.

integral kernel of the inverse (of the differential operator)

Helmholtz’s theorem (aka fundamental theorem of vector calculus)

The theorem is that any smooth, integrable vector field in three dimensions splits into a sum of an irrotational field and an incompressible field.

Helmholtz does seem to have first published the theorem (although it isn’t the only theorem he published). His name has been the target of numerous name-droppers... and they have cheerfully tacked his name onto other theorems. The theorem is pretty darned important.

These are both poor names for an important theorem. Much better:

splitting theorem (of vector calculus),

or, if Latin is preferred:

decomposition theorem (of vector calculus)

Jacobian matrix / derivative

— taken in context, is just the

derivative,

which has the same number of syllables, and doesn’t obfuscate the connection with the elementary derivative. (Which is a good thing, unless you’re a proponent of obfuscation.)

The term is used variously to refer to the matrix and to its determinant. That is needless confusion. Let’s call its determinant the

determinant of the derivative.

Wronskian / fundamental matrix

This form does appear to have been first published by Wroński in 1811, as reported by Muir. That same historian is also responsible for assigning a name to the thing. (See A Treatise on the Theory of Determinants (1928), Ch. XVIII.)

Within a context of vector-valued functions of a scalar variable, the term

matrix of derivatives

has only two obvious interpretations. They are transposes of one another. Of course, the term ‘Wronskian’ suffers from the same ambiguity.

Fréchet derivative

It does seem that Fréchet was the first to publish this usable formulation of a derivative for functions mapping one vector space to another.

Within the context of such functions, the derivative is dictated by its geometric conception: the derivative is the derivative. There is no need of further specification: it is just the

derivative.

(See Jacobian matrix.)

Gateaux derivative

As with the Fréchet and Jacobian derivatives, there is really no other candidate for a derivative within the context, and a suggestive name already exists:

directional derivative.

L’Hospital’s rule

— deals with limits of ratios where the ratio of the limits is indeterminate, formally 0/0 or ∞/∞.

rule of indeterminate forms

(And then maybe we can stop arguing whether we should spell his name in the new French orthography or that of his time.)

linear algebra

Gaussian elimination (and Gauss-Jordan elimination)

— was described thousands of years before Gauss or Jordan. In particular, the procedure is described in the Chinese book, 九章算術 (The Nine Chapters on the Mathematical Art), which was compiled before 170 CE. These names are used to refer to many variants of the basic procedure, which are also commonly called

row reduction.

The procedure is even more generally described by

elimination of terms.

A further operation, that of ensuring the leading term of each row is 1, is applied by some authors to heap honors on both of the great men. The resulting matrix is then in “reduced row echelon form”, which is itself objectionable: what exactly was “reduced”?
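A bare-bones sketch of the procedure (the names and the tolerance are my own choices):

```python
# Row reduction ('elimination of terms') to reduced row echelon form,
# on a matrix given as a list of rows.

def row_reduce(rows):
    rows = [list(r) for r in rows]
    pivot_row = 0
    for col in range(len(rows[0])):
        # find a row at or below pivot_row with a nonzero entry in this column
        for r in range(pivot_row, len(rows)):
            if abs(rows[r][col]) > 1e-12:
                rows[pivot_row], rows[r] = rows[r], rows[pivot_row]
                break
        else:
            continue
        p = rows[pivot_row][col]
        rows[pivot_row] = [x / p for x in rows[pivot_row]]    # leading 1
        # eliminate this column's terms from every other row
        for r in range(len(rows)):
            if r != pivot_row:
                m = rows[r][col]
                rows[r] = [x - m * y for x, y in zip(rows[r], rows[pivot_row])]
        pivot_row += 1
        if pivot_row == len(rows):
            break
    return rows

# Augmented system  x + 2y = 5,  3x + 4y = 11  ->  x = 1, y = 2
reduced = row_reduce([[1, 2, 5], [3, 4, 11]])
```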

Laplace expansion (of matrix determinant)

cofactor expansion.

Binet-Cauchy identity

dot-of-cross-products identity.

singular matrix / operator

In the vernacular, the term ‘singular’ signifies something extraordinary, surprising, perhaps one of a kind; usually in a positive sense. Applied to matrices, the term is not enlightening as to the way in which the matrix is extraordinary, and is misleading as to the desirability of the property.

The opposite condition, that of an invertible matrix, is to be regarded as positive in most applications, and it is explicit as to the matrix property. So the common term is preferable:

non-invertible matrix / operator

P.S. Some authors assign a distinct technical meaning to the term singular (e.g. for a matrix, not having a complete set of pivots) only to proceed to show that the meanings amount to the same thing. In these cases, it strikes me as an unnecessary profusion of terminology.

determinant

It appears that this term was introduced by Cauchy, in his Latin writings, in which it is used as an adjective. The function he named had previously appeared in many publications, though. [See Muir, Theory of Determinants in the Historical Order of Development, v. 2 (1911)]

(Note that the terms ‘determinant’ and ‘matrix’ were often used as synonyms, in the 1800s, before the advent of matrix algebra.)

The determinant does indeed determine several things.

The main problem with the term is, which property of a system of numbers is being determined? That is a minor complaint. I can think of no better word in English for this function.

Part of the difficulty in naming the function is that it does two geometrically different things: it is the product of two independent functions. The absolute value of the determinant is the amount by which the measure of a solid figure is changed by the linear operator, and its sign answers the question of whether the linear operator reverses the sense in which the vertices of a figure are traversed.

The one function might be called

extension

The other function might be called

parity.
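For a 2×2 matrix the two factors can be spelled out directly (the names ‘extension’ and ‘parity’ are, of course, the proposals above, not standard):

```python
def det2(a, b, c, d):
    # determinant of [[a, b], [c, d]]
    return a * d - b * c

def extension(a, b, c, d):
    # the amount by which the operator scales areas
    return abs(det2(a, b, c, d))

def parity(a, b, c, d):
    # orientation sign (for an invertible matrix): +1 preserves, -1 reverses
    return 1 if det2(a, b, c, d) > 0 else -1

# [[0, 2], [1, 0]] doubles areas and reverses orientation:
ext = extension(0, 2, 1, 0)
par = parity(0, 2, 1, 0)
```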

fundamental theorem of linear algebra

The exact meaning of this phrase varies, but generally it refers to a collection of equivalent statements regarding the rank of a mapping between two finite-dimensional spaces.

Other terms in use suffice to identify the theorem:

rank-nullity theorem

or even

rank theorem

orthogonal matrix

This is a pernicious example of a term that seems to refer to one thing, but is used to refer to something rather different.

For, by sad current convention, it refers to matrices whose columns are orthonormal. There is no current term for matrices whose columns are just orthogonal, although such matrices arise very often. In a less perverse world, matrices whose columns are orthogonal would be called “orthogonal”, and ones whose columns are orthonormal would be called “orthonormal”.

The best solution here is a radical term-ectomy. The plan for the operation will be: the reformers gather en masse, with banners, bullhorns, and torches, with a change of this term among their slogans and on their list of demands.

For some time, textbooks will contain humorous footnotes lamenting that dark, backward period of history where the terms were conflated, and how a mostly bloodless revolution had led to a reform that erased an egregious, buttheaded stumbling block, and (regrettably, but understandably) pilloried recalcitrant professors. Eventually, the old books that propagated this terminological mistake, together with reactionary professors, will be relegated to the object lessons of history.

Hamel basis

Schauder basis

This is the extension of the notion of basis to infinite-dimensional spaces, where the notion of linear combination is extended to include limits of linear combinations. But even this term is unclear: some authors restrict discussion to countable bases, and others do not.

The distinction is one of topology of the space in question, and that is a matter of context. Within the context, all of them are just a

basis (finite, countable, uncountable).

defective matrix

— a negative term, suggesting something to discard. (The German word is no better: entartet, ‘degenerate’.)

Geometrically, such matrices effect a shear in some direction, or equivalently, they are composed with a shear. Thus

sheared matrix

(the perfective form serving as a distinction from a matrix that is nothing but a shear operation.) This at least connects the condition to a kind of operation.

It may also be useful to focus on a positive form of the algebraic property: for the unsurprising, unchallenging case, eigenvektorvoll, or characterizable; and for the negative property,

eigenvektorschwach, or hard-to-characterize,

Of course, it is always the case that some higher power of any such matrix has eigenvectors, so it would be wrong to call them “non-characterizable”.

eigen- characteristic-

The modifier ‘characteristic’ is awfully long, and Greek besides. The German ‘eigen-’ is at least shorter, although it isn’t English vernacular.

In English, we can say things like: “I wouldn’t have pegged you for a mathematician.”, or, more commonly, “I wouldn’t have pegged you for a drunk.” How about chunking the big foreign words for ‘peg’:

peg-vector

peg-value

peganalysis

peggable matrix

and instead of ‘generalized eigenvector’,

higher-order peg-vector

(I also considered using ‘shtick’, but a shtick isn’t really a characteristic. And there is ‘trait’, but that’s scarcely more English.)

Cayley-Hamilton theorem

The statement is: A square matrix (over a commutative ring) satisfies its own characteristic equation.

This theorem, so stated, could not have been conceived, let alone proved, by Cayley or Hamilton. Matrix theory didn’t exist in their time, nor did the concept of rings. Hamilton did prove a special 4×4 case, where the matrix represents a quaternion multiplication. I have read that Cayley later stated the theorem for the 3×3 case, and proved it for the 2×2 case. It was Frobenius who later published a proof for the general case.

In a matrix theory context, this would identify it:

characteristic equation theorem,

or, not gaining any syllables over the name-dropping name:

eigenequation theorem

or, why not just abridge the statement (again, as a means to identify the theorem, not to state it fully):

theorem that a matrix solves its eigenequation

Moore-Penrose generalized inverse

— a simple extension to general linear operators of the earlier matrix notion, which goes by the antidescriptive name, pseudoinverse. Now, pseudo-anything is pseudo-imaginative. Surely, the simple

generalized inverse

is suggestive enough in a context of least-squares, or approximation.

But “generalized” is a pretty general qualification. Surely, there is a tighter term... maybe something even more descriptive, something along the lines of closest thing to an inverse, or

inversoid.

Cramer’s rule

This rule was in use long before Cramer published a version of it, and special cases had been published previously. It is a method for solving systems of linear equations, based on the determinant. Natural names would be:

determinant rule

or

determinant method
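For a 2×2 system the determinant method reads off directly (a sketch; the helper name is mine):

```python
# Solve  a x + b y = e,  c x + d y = f  by the determinant method
# (Cramer's rule): each unknown is a ratio of determinants, the numerator
# having the right-hand side in place of that unknown's column.

def solve_2x2(a, b, c, d, e, f):
    det = a * d - b * c
    if det == 0:
        raise ValueError("non-invertible coefficient matrix")
    x = (e * d - b * f) / det
    y = (a * f - e * c) / det
    return x, y

# x + 2y = 5,  3x + 4y = 11  ->  (1, 2)
x, y = solve_2x2(1, 2, 3, 4, 5, 11)
```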

Gram-Schmidt orthonormalization / algorithm / process (aka. Iwasawa decomposition)

This procedure pre-dates Gram and Schmidt, and it pre-dates Iwasawa even more. Not that it matters — a non-stupid name for it is:

successive orthonormalization
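The procedure itself is short enough to show in full (a plain-Python sketch; the names are mine):

```python
# Successive orthonormalization: take each vector in turn, remove its
# components along the vectors already accepted, and scale to unit length.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def orthonormalize(vectors):
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            c = dot(w, b)                            # component along b
            w = [wi - c * bi for wi, bi in zip(w, b)]
        norm = dot(w, w) ** 0.5
        if norm > 1e-12:                             # skip dependent vectors
            basis.append([wi / norm for wi in w])
    return basis

q = orthonormalize([[3.0, 1.0], [2.0, 2.0]])
```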

adjugate / adjunct / classical adjoint matrix

The term ‘adjugate’ shares its pedigree with ‘determinant’, but it fails to suggest anything — in what sense is it adjunct, adjoined? Is it a kind of appendix, or parasitic growth? Somehow not as good as the original matrix? In fact, it bears a practical relation to the original matrix, but bears no other connection to it.

The other terms have similar problems, and are still in use. (These days, ‘adjoint’ is used to refer to a quite different matrix relation, and, for that reason, is to be avoided.)

I don’t have a satisfactory improvement on the term ‘adjunct’. It is badly described as the transpose of the cofactor matrix — which only raises questions that raise other questions. It is quickly described as the determinant times the inverse, which trivializes its role.

The ‘adjugate’ plays a practical role as a step in a process to calculate the matrix inverse, which is expressed in the terminological sequence: minor → cofactor → cofactor matrix → adjunct matrix → inverse.

While the matrix term ‘minors’ is suggestive, as it refers to smaller portions of the matrix, and ‘inverse’ likewise, since it refers to a matrix that un-does what the matrix does, the other terms of the sequence are poor or bad.

See ‘matrix cofactors’.
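The whole sequence fits in a few lines for the 2×2 case (a sketch; for larger matrices the minors are sub-determinants):

```python
# minor -> cofactor -> cofactor matrix -> adjugate -> inverse,
# for the matrix [[a, b], [c, d]].

def inverse_2x2(a, b, c, d):
    det = a * d - b * c
    if det == 0:
        raise ValueError("non-invertible")
    # minors: delete an entry's row and column; here a single entry remains
    # cofactors: minors with alternating signs
    cofactor_matrix = [[d, -c], [-b, a]]
    # adjugate: transpose of the cofactor matrix
    adjugate = [list(row) for row in zip(*cofactor_matrix)]
    # inverse: adjugate divided by the determinant
    return [[x / det for x in row] for row in adjugate]

inv = inverse_2x2(1, 2, 3, 4)
```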

matrix cofactors

The term ‘cofactor’ is doubly bad: it suggests something associated with some other ‘factor’. (Maybe there is such an association, but it escapes me what that factor might be.) A better term is:

alternating minors.

I also like

flipping minors,

especially because of the effect it has on British ears.

conjugate transpose

This term is just blithering confusion, and a failure to find the proper perspective.

If a matrix of complex (planar!) values is written with each entry replaced by the real matrix representation of the complex entry, then the simple transpose of that matrix will represent the conjugate transpose.

The other way around: if it is simply accepted that the transpose of a complex value is its conjugate, the recursive transposition of the matrix will be the conjugate transpose.
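The first claim is easy to check numerically (a sketch; the helper names are mine):

```python
# Replace each complex entry a+bi by the 2x2 real block [[a, -b], [b, a]].
# The plain transpose of the expanded real matrix then represents the
# conjugate transpose of the complex one.

def real_block(z):
    return [[z.real, -z.imag], [z.imag, z.real]]

def realify(m):
    # expand an r x c complex matrix into a 2r x 2c real matrix of blocks
    out = []
    for row in m:
        blocks = [real_block(z) for z in row]
        out.append([b[0][k] for b in blocks for k in (0, 1)])
        out.append([b[1][k] for b in blocks for k in (0, 1)])
    return out

def transpose(m):
    return [list(col) for col in zip(*m)]

def conj_transpose(m):
    return [[m[j][i].conjugate() for j in range(len(m))]
            for i in range(len(m[0]))]

m = [[1 + 2j, 3 - 1j], [0 + 1j, 2 + 0j]]
same = transpose(realify(m)) == realify(conj_transpose(m))
```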

In any case, drop the silly adjective. It will reduce the mess regarding these topics by half. And it will drive old-timers crazy! Bonus!

If there is any need for a non-conjugate transpose of a complex matrix, please let me know. We can come up with some fancy name for that strange beast.

Otherwise, what should we call the non-conjugate ‘transpose’ of a complex matrix? I propose: mistake.

There is another term, ‘adjoint’, meant to explicitly warn off stupid mistakes. Unfortunately, the word suggests something adjoined to something in some unspecified way. What way? nudge-nudge-wink-wink? This is not much of an improvement.

Cauchy’s (or Cauchy-Schwarz or Cauchy–Bunyakovsky–Schwarz) inequality

inner product norm inequality

or

dot product norm inequality
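In that vocabulary the statement is simply: the dot product is bounded by the product of the norms. A quick check (names mine):

```python
# |u . v| <= |u| |v| for real vectors u, v.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

u, v = [1.0, 2.0, 3.0], [4.0, -5.0, 6.0]
lhs = abs(dot(u, v))                        # |u . v|
rhs = dot(u, u) ** 0.5 * dot(v, v) ** 0.5   # |u| |v|
```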

Hermitian matrix / operator

The common descriptive name is:

self-adjoint matrix / operator

See also the note about the term ‘conjugate transpose’.

Vandermonde matrix / determinant / polynomial

The matrix just has non-negative powers of n nonzero values, with the powers increasing from 0 in one direction, and the values changing in the other direction. One major use is in polynomial interpolation. Its determinant is often called a Vandermonde determinant.

Vandermonde did do some early work with this matrix, but from early on, it was questioned whether his involvement merited an attribution. See “A case of mathematical eponymy: the Vandermonde determinant”, Bernard Ycart, 2013.

Definitions of the term disagree as to whether the values vary in columns or in rows. Perhaps an extra specifier, ‘row’ or ‘column’, would serve as a clarifier.

matrix of polynomial interpolation coefficients

matrix of geometric progressions

value-power matrix
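The structure is simple enough to state in a line of code (the name follows the proposal above):

```python
# Row i holds increasing powers of the i-th value: exactly the matrix of
# the linear system for polynomial interpolation (solving V c = y gives
# the coefficients c of the interpolating polynomial).

def value_power_matrix(values):
    n = len(values)
    return [[x ** k for k in range(n)] for x in values]

vm = value_power_matrix([1, 2, 3])
```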

Gram matrix / determinant, Gramian

A succinct description of the matrix is straightforward:

matrix of dot products

The term ‘Gramian’ is an obfuscation for the determinant of that matrix.
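The description is nearly a one-liner (name as proposed):

```python
# The matrix of dot products of a list of vectors.

def matrix_of_dot_products(vectors):
    return [[sum(x * y for x, y in zip(u, v)) for v in vectors]
            for u in vectors]

g = matrix_of_dot_products([[1, 0], [1, 1]])
```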

Kronecker product

This was being used by many people around the time of Kronecker. I have been unable to discover why his name is attached to it — probably another accident. But there is a very easy term sometimes used for the same thing.

outer product (for matrices)

Euclidean norm

It’s the familiar distance… as though Euclid invented distance. As though he played with norms. Because its analytic representation involves squares of coordinates, it’s commonly known as the

two-norm

Frobenius (or Hilbert-Schmidt) norm

In German, it’s also called “Schurnorm”.

The way I was introduced to this thing, it looked pretty artificial. Later, I read that it can be expressed as √trace(A*A).

This suggests more useful names, such as

trace-norm

And now that I look at it again… it looks like a very natural extension of the two-norm to matrices. This is the way I’ll think of it and remember it in the future, thanks to this name.
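For real matrices the identity is easy to see: trace(AᵀA) sums the squares of all the entries, so the trace-norm is the two-norm of the entries read as one long vector. A sketch:

```python
# trace-norm: sqrt(trace(A^T A)), computed directly as the square root of
# the sum of squared entries (real case).

def trace_norm(m):
    return sum(x * x for row in m for x in row) ** 0.5

n = trace_norm([[3, 0], [0, 4]])   # sqrt(9 + 16)
```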

analysis

Riemann sum

Riemann’s name is associated with it because he presented a relatively modern, rigorous definition of the integral as a limit of such sums. The older term for this kind of construction was

quadrature

which itself is misleading, as it suggests a square shape, and the shape involved is often not rectangular. But the idea is that a square is a shape whose relation to area is unquestionable, and one reduces the problem of area to that of the sum of areas of squares.
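The construction itself, in its crudest form (a left-endpoint sketch):

```python
# Quadrature by a plain Riemann sum: the area under f on [a, b] reduced to
# a sum of areas of thin rectangles.

def riemann_sum(f, a, b, n=10000):
    h = (b - a) / n
    return sum(f(a + i * h) * h for i in range(n))

area = riemann_sum(lambda x: x * x, 0.0, 1.0)   # close to 1/3
```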

Cauchy sequence

self-nearing sequence

Cauchy-Riemann equations

In the context where this term is used,

differentiability equations

or

differentiability conditions

would only serve better.

Note: in German literature, they are often called Riemann-Cauchy equations. (Because it’s all about honoring who is greater — and clearly…)

Arzelà-Ascoli (aka. Ascoli-Arzelà) theorem

— a quite technical but useful statement that occurs often in basic analysis. The fact that it packs so much in, makes me wonder if it consists of more easily-digestible chunks. Those chunks could in turn have simple names. But let’s try to play the ball where it lies.

Within a context of real-valued functions on an interval, the statement is: Any uniformly bounded, equicontinuous sequence has a uniformly convergent subsequence. If that must have a name, let it be

theorem of uniform convergence of uniformly bounded equicontinuous sequences

I maintain that a long term is better than having to remember whether the accent is acute or grave plus remembering all those long qualifications. In fact, I understood and memorized the statement, and was very annoyed to further memorize names to go with it.

Bolzano-Weierstrass theorem

Sometimes it’s called the

sequential compactness theorem

I would prefer something more specific:

bounded sequence convergence theorem (of Rⁿ)

complex analysis

— an awfully bad name for a field of study. At the same time, it sounds off-putting and pretentious, or self-deprecating. What a mess! It’s the “Boy named Sue” of mathematics. The full name, “theory of functions of a complex variable”, clarifies at least the reference of the adjective, but it’s very wordy, and no less off-putting.

In German, this field is called Funktionentheorie, but with the modern use of ‘function’, that isn’t helpfully descriptive.

In accordance with the proposal for planar numbers, the obvious choice would be

planar analysis

Here, the adjective properly operates on the other word in the term. To be more explicit:

analysis of functions of a planar variable

Argand diagram / plane

The notion of plotting complex (or ‘planar’!) numbers on an analytical plane is pretty obvious. It’s so obvious, it is unclear to me that it is in any need of a name; there is surely no need to tack someone’s name onto it. Besides, Argand was hardly the first to use it: Caspar Wessel (at least) preceded him in publication.

It’s really more of an interpretation than anything else.

complex (or planar) numbers as points of the analytic plane

Euler’s formula / de Moivre’s formula

Euler did publish the formula in the form we know it today. But it’s important enough to merit a descriptive name.

exponential-trig formula

or

exponential-polar formula

I don’t know who is responsible for de Moivre’s formula. I read it appears nowhere in his published works. It contains an extra variable, which is formally a power in one place and an angle multiple in another. Maybe something like:

exponential-power-trig-multiple formula
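Both identities are easy to check numerically:

```python
import cmath
import math

# exponential-polar formula: exp(i t) = cos t + i sin t
t = 0.7
euler_lhs = cmath.exp(1j * t)
euler_rhs = complex(math.cos(t), math.sin(t))

# the power / angle-multiple identity (de Moivre):
# (cos t + i sin t)^n = cos(n t) + i sin(n t)
n = 5
demoivre_lhs = euler_rhs ** n
demoivre_rhs = complex(math.cos(n * t), math.sin(n * t))
```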

Cauchy (or Cauchy-Goursat) integral theorem

There is already a term in common use in other fields for the integral being zero. The theorem is thereby rendered:

holomorphic functions are conservative.

fundamental theorem of algebra

I am unaware of any proof of this theorem that avoids geometry or analysis, and no proof that is in any sense direct. The theorem is certainly important, but it does not appear to be literally fundamental, and furthermore, it does not seem to be a theorem of algebra, unless algebra is understood to subsume large swaths of geometry and analysis. It is:

Every polynomial with complex coefficients has a full set of complex roots, meaning, if the polynomial is of degree n, then it can be factored into n linear terms, each with complex coefficients.

A little abridgment results in:

a complex polynomial’s roots are complex.

This statement isn’t a complete rendition of the theorem, but to identify the theorem, it doesn’t need to be. Most importantly, it distinguishes the theorem from the question of roots of real polynomials. Finally, this statement of the theorem has only one more syllable than its conventional name.

Cauchy integral formula

— is about the integral of a holomorphic function times a pole. How about:

pole integral formula

An alternative refers to its picking out of a function value:

picking-out formula (for integrals in the plane)

Riemann mapping theorem

— surely the most impressive theorem that might be called

univalent mapping theorem

Möbius transformation / group

I guess the connection between A.F. Möbius and these objects comes from his early studies on projective geometry. It is impossible that he was the first to publish such a transformation. However, in Der barycentrische Calcul, esp. § 134-137, he discusses forms of this kind, which he calls rationalen Ausdrücken (rational expressions), or, as a mapping, rationalen Functionen (rational functions). I haven’t determined when they started being called by his name. This work is known to have influenced both Gauß and Cauchy. (It is telling that even to determine why we call a mathematical object by a certain guy’s name can be so hard.)

The transformations commonly go by several more descriptive names. The fact that they are formally a fraction of linear functions makes terms such as these natural:

linear fractional transformation / group

— rolls pretty well off my tongue, but it is kind of long. Also, to some people, the term suggests incorrectly that the transformation is linear, so they swap ‘linear’ and ‘fractional’, but that in turn confuses other people. I’m not going to call it that.

These have also been called “bilinear transformations”, but that term is in use to mean something quite different, and besides, doesn’t say anything about the fraction.

They have also been called “projective transformations”, but that term suffers from a failure to indicate the sort of projection. Maybe something like “spherical projective transformation”, or “projection of the sphere” could work.

They are also a special case of the objects referred to by the projective geometry term:

homography

which is much more compact and geometrically suggestive, although in a Latin-ish way.

Liouville’s theorem

In the context of complex analysis, here it is:

Any bounded entire function is constant.

Somebody explain to me why it helps to hang a guy’s name off that. I would prefer to remember the simple fact. If some jackass wants me to regurgitate some “Liouville’s theorem”, I will retort: “Whose theorem?” That should put them in their place.

Check out the Wikipedia disambiguation page on Liouville’s theorem. It lists four of them, as well as an equation, a formula, and a class of numbers. Furthermore, the above theorem from complex analysis was proven by Cauchy.

Maybe it helps us remember which theorems, equations, formulas and numbers have the property of Liouvilleté.

Koebe function

extremal univalent function

(Up to linear fractional transformations, it is unique.)

Bieberbach conjecture / de Branges’ theorem

Bieberbach did in fact forward the conjecture, which, for decades, ruined the lives of many talented mathematicians, until de Branges published a proof.

Bieberbach was a nationalist racist, an unrepentant Nazi party member. Had the second world war gone even worse than it did, silly people would probably still call it by Bieberbach’s name. But this is beside the point here.

The theorem relates a simple bound on the coefficients of a function’s power series, to the function having the property of being a univalent mapping of the unit disk. A name suggests itself:

univalent power series theorem

real analysis

The situation with the name of this field is the flip-side of that of complex analysis. The name came as a reaction to the analysis of functions of a complex variable — not as a description of the study, but rather, what the study is not. Very unfortunate. (And presumably, in contrast to that other field, this is the field for real mathematicians.)

The core of this field is the notion of measure — it is this that distinguishes it from more basic calculus on the line. It’s reasonable to call the whole field

measure theory

Cantor set / distribution

The distribution is sometimes called the devil’s staircase. The second word is evocative, but the first is just an admonition, in the form of some (presumably imaginary) guy’s name. Not great.

This is a fancy structure, one could say an invention, and as such, I wouldn’t object to it bearing the inventor’s name. But it is said that neither Cantor nor the devil invented it, and that should disqualify either from the honor of appellation. I read that it was invented by one Henry John Stephen Smith, whose name is inappropriate in other ways for naming mathematical objects.

Falling back to a descriptive name, the salient feature is that the set is formed by recursively excluding the middle segment of a division of the segment into three. So:

ternary exclusion set?

or, more Anglo:

third-shut-out set?
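The recursion is compact (a sketch tracking interval endpoints only):

```python
# Repeatedly exclude the open middle third of each remaining interval.

def exclude_middle_thirds(intervals, depth):
    for _ in range(depth):
        nxt = []
        for a, b in intervals:
            t = (b - a) / 3
            nxt += [(a, a + t), (b - t, b)]   # keep the two outer thirds
        intervals = nxt
    return intervals

# After two rounds: four intervals, total length (2/3)^2 = 4/9
level2 = exclude_middle_thirds([(0.0, 1.0)], 2)
```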

differential geometry

Gaussian curvature

intrinsic curvature

(first and second) fundamental form

(first and second) curvature form

theorema egregium

The theorem is “The intrinsic curvature of a surface is invariant under local isometry.”

Gauss wrote in Latin, and expressed well how, for him, the theorem stood out, but … should we call a theorem “bodacious theorem”, if that is how its discoverer expressed their pleasure with it?

intrinsic curvature invariance theorem

osculating (curve, circle, sphere....)

I like the term. That doesn’t shield it from improvement attempts, however. A less foreign choice of words sheds a couple of syllables, at the expense of being a little personal:

kissing (curve, circle, sphere....)

Kronecker delta (tensor, operator, symbol)

Wikipedia admonishes us not to confuse the Kronecker delta with the Kronecker symbol or the (Dirac) delta function. But beyond that, it is used to mean subtly different things in the context of the tensor summation convention than otherwise, as in the summation of a series.

In the tensor context, it is also known by a name suggesting its action:

substitution tensor

I have second thoughts about this… but maybe about the notation itself. In its use in symbolic manipulation, ‘substitution tensor’ describes the mixed form δ^a_b quite well, as it does nothing but replace one index symbol with another. But the form δ_ab (or δ^ab) is an essentially different thing, effectively performing a linear-algebraic transpose on its operand. In this form, it performs a raising or lowering of the index. Like an ‘elevator’...hmmm.

I got used to this at one time, too. To me, these operations seem like essentially different things. But maybe it would be better to use a different symbol for those two things, and to speak instead of raising and lowering tensors, rather than indices.

Einstein notation

Of all the things the guy did, the dweebs decided to name after their hero something he did not invent, and would not have taken credit for. Einstein rolls in his grave every time some twerp refers to this shorthand by his name. Do not do it, if you have any respect whatever for the man or for physics or for yourself.

There are already multiple quite serviceable names for this:

(tensor) summation convention

abstract index convention

Levi-Civita symbol

You see that name only in print: hardly anybody says it. They say ‘epsilon’… but that is a stupid name for an important thing.

A term in common use,

alternating symbol,

seems to me quite adequate, but the terms

permutation symbol, and antisymmetric symbol

are almost as good. (In fact, the last one identifies the thing, up to an overall sign.) They both serve adequately, and suggest something about its structure. A longer term also in common use:

totally antisymmetric symbol

The fact that ‘alternating symbol’ or ‘permutation symbol’ is adequate suggests more active options:

alternator

or

permutator.
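A sketch of the alternator as a function of its indices, by inversion-counting (the name follows the proposal above):

```python
# +1 for an even permutation of (0, ..., n-1), -1 for an odd one,
# 0 when any index repeats.

def alternator(*indices):
    if len(set(indices)) < len(indices):
        return 0
    sign = 1
    for i in range(len(indices)):
        for j in range(i + 1, len(indices)):
            if indices[i] > indices[j]:
                sign = -sign                 # each inversion flips the sign
    return sign
```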

differential topology

Lie group / algebra

— appears to have been first applied by one of Lie’s students, and Lie is surely responsible for having initiated the study of these objects.

And ‘Lie’ has the advantage of being monosyllabic — otherwise we could call these groups what they are:

differentiable manifold group

Even for me, a 9-fold increase in syllables is excessive. The problem is in the term ‘differentiable manifold’, which is a common and useful enough concept to deserve a less babbly name. Well, maybe

smooth manifold group

That’s now a 4-fold increase, but it says something besides a guy’s name that is often misread in English.

This doesn’t quite work for the Lie algebra, because the algebra is only tangentially associated with the smooth manifold. And there it is:

tangential algebra

Is that confusable with a tangent space? Still, I think it’s a good shot.

variational calculus

Hamiltonian

energy function

Lagrangian

potential function

Euler-Lagrange equations

In physics, these could be called

conditions minimizing potential wrt time-momentum

calculus of distributions

Fourier integral / transform / analysis

Fourier advanced and advocated the use of trigonometric expansions, but other people invented them, and published their discoveries, at least 15 years prior to his fundamental work.

Their primary interpretation provides suggestive terms:

frequency integral / transform / analysis

Laplace transform

— had been investigated by other authors before Laplace.

half-line frequency transform

Plancherel theorem (or Parseval–Plancherel identity)

FIXME: the statement in Rudin's book is broader than what I remembered. Needs more research.

The statement of the theorem, in the context of a properly-normalized frequency transformation, is that the transformation preserves the norm.

So in context, the theorem (or identity, whatever) can be formulated as a statement of the property:

the unitarity of the transformation.

convolution

— a mid-20th century Latinization of the German term ‘Faltung’, by which the concept had been known previously in English literature. The German word of course means ‘folding’, but can also mean ‘pleating’. To me, the integral looks like a

folding
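
The folding is visible in the discrete case: one sequence is reversed (folded back) and slid across the other. A minimal pure-Python sketch (the function name is mine):

```python
def fold(f, g):
    """Discrete convolution: (f * g)[i] = sum over k of f[k] * g[i - k].
    The index i - k runs backwards through g: that is the 'folding'."""
    n = len(f) + len(g) - 1
    return [
        sum(f[k] * g[i - k]
            for k in range(len(f))
            if 0 <= i - k < len(g))
        for i in range(n)
    ]
```

For example, fold([1, 2, 3], [0, 1]) gives [0, 1, 2, 3]: the second sequence is flipped as it sweeps past the first.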

Dirac delta function

It’s not a function – and delta is just a letter. Dirac did use a delta for this thing and called it a delta function, and surely would have been horrified to find it came to be called his delta function. A much better term is:

impulse distribution

(Which is the distributional derivative of the step function distribution.)

Kronecker delta function

discrete impulse function

Heaviside step function

— is about as gratuitous a name-dropping as there is. He didn’t invent this thing, he wasn’t the first to use it. (He did invent a symbol for it, a bold-face numeral 1.) And a quite descriptive term is in common use:

unit step function

Gaussian distribution

— also called Laplace-Gauss distribution by the especially obsequious. But normally it’s called the

normal distribution

Now, that’s not terribly suggestive either, but it does have a connection to the notion of a geometric normal, by way of an inner product. (In fact, Gauss called it that, for that reason.)

differential equations

Helmholtz equation

vector wave equation

D’Alembertian

wave operator

Poisson equation; Laplace’s equation

— both commonly called by the physics concept that they describe:

potential equation

the distinction being made by the qualifier homogeneous. There are also variants in different dimensions.

Korteweg-de Vries equation

— first published by Boussinesq.

shallow wave equation

Monge cone

osculating cone

or, shorter and less Latin and maybe nicer (but maybe too suggestive)

kissing cone

The term tangent cone is also acceptable to me. (In Russian, the term соприкасаться (adjoining) is used, but that misses something geometrical.)

Euler’s spiral, Cornu’s spiral, clothoid

The equation of the spiral has arisen in several important physical contexts over time. James Bernoulli was the first to publish a description of a mechanics problem, of which the spiral is a solution. Both Euler and Cornu published early studies of the equations and spiral.

The fanciful term clothoid was coined by Cesàro, referring to the Greek fate Clotho, the spinner, presumably because the curve resembles thread wound around a distaff. That’s pretty far-fetched, besides being a 19th century classicism that doesn’t mean much outside of that culture.

Raph Levien’s Ph.D. thesis The Euler spiral: a mathematical history (2008) goes into a lot of detail.

The introductory paragraphs of the thesis provide an idea.

“The elastica is the shape defined by an initially straight band of thin elastic material (such as spring metal) when placed under load at its endpoints. The Euler spiral is the solution of a sort of inverse problem; the shape of a pre-curved spring, so that when placed under load at one endpoint, it assumes a straight line.”

Now, the elastica describes a shape everybody has seen. The Euler spiral does not—but those who work with it ought to know the elastica, and the relation between the two. They ought to. I did not, because it was introduced to me by an optics book that didn’t mention the relation; I recognized the relation only when I read the above paragraph.

I might have been enlightened if the spiral were called the

inverse elastica spiral

conchoid of Nicomedes

Conchoids are a class of curves constructed from a fixed point, a fixed distance, and another curve: along each ray from the fixed point, mark the points lying the fixed distance from where the ray meets the other curve. (Look it up, find a picture.) They get their name because they vaguely resemble a conch shell.

The case in which the other curve of the construction is a line, is the one that has carried Nicomedes’ name. Given the name conchoid, this curve could be called:

line-conchoid
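
For reference, with the pole at the origin, the fixed line at distance a from the pole, and offset b, the line-conchoid has the polar equation (a sketch of the standard form; the two signs give the two branches):

```latex
r = \frac{a}{\cos\theta} \pm b
```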

Noether’s theorem

(Note: Noether produced at least two theorems that are known by her name, this one, and one in abstract algebra.)

When the theorem is introduced in texts, the name is always followed by a summary of its topic. Distilled down, the summary would serve as a descriptive name:

conservation-symmetry theorem

(or vice-versa.)

special functions

Bessel functions

Called ‘Zylinderfunktionen’ in German.

cylinder functions

gamma function

Extensions of functions on the reals to the complex plane are usually called by the same name. Why not just call it

factorial function
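
The shift by one is the only wrinkle: for positive integers n, Γ(n) = (n − 1)!. A quick check in Python:

```python
import math

# The gamma function extends the factorial, shifted by one:
# gamma(n) == factorial(n - 1) for every positive integer n.
for n in range(1, 8):
    assert math.gamma(n) == math.factorial(n - 1)
```

So the proposed name would need a one-line footnote about the shift, which seems a fair trade for dropping the Greek letter.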

topology

separable (topological space)

— a topology with a countable dense subset. What is separated in a separable space??

Banach space

— usually defined as a

complete normed space

Cauchy space

complete space

has always been an alternative. It is suggestive of the existence of a limiting point.

Hilbert space

— usually defined as a

complete inner product space.

Heine-Borel property / theorem

The eponymous mathematicians both did significant work toward the modern statements, but neither initiated nor finished the effort. Several other famous names were involved. Why exactly don’t we list everybody involved every time we mention these statements? No good reason. Why exactly, just these two guys? No good reason.

In the context of sets in a metric space, the property is:

closed and bounded is equivalent to compact

The answer to the often-repeated bemused confession “I forget… what is the Heine-Borel theorem?” is “For subsets of geometric n-spaces, closed and bounded is the same as compact.”

The statement of the property and the theorem in this case is, for those acquainted with topology, very easy. Referring to it by the names of a couple of researchers does not help.

Euler characteristic / Euler number / Euler-Poincaré characteristic

— is an invariant involving the numbers of parts (vertices, edges, faces, and so on) of polyhedra. Written out, it’s not so bad:

polyhedral parts characteristic / number / invariant

Jordan arc / curve

Descriptive alternatives already exist:

(plane) simple arc

(plane) simple closed arc

separation axioms

— has got to be the lowest level of math nomenclatural hell. It is a travesty, an embarrassment.

Even the term ‘separation axiom’ is screwy (see the short Wikipedia page on “History of the separation axioms”).

As to the naming of the axioms themselves: they first attempted to smear the different concepts with the names of the early researchers, and having run out of names, they started adding judgmental modifiers (regular, normal, completely, perfectly). Being creatures of little imagination, they tried numbering them, again losing some ducks from the row. (I hope they were not surprised.)

I struggled with limited success to memorize all these things, and with less success to understand why each was something I wanted to know.

! $ € ¥ ! Here is an offer: I will buy a beer (or other moderately-priced refreshment) for the first succinct, descriptive name that I receive for each of these ‘axioms’. Furthermore, if you like, I will post your name here as the inventor of the name. If your name is X, we can call the name “X’s name for topological separation axiom Y” in eternal honor of your contribution to science. Runners-up will be named “name N1, name N2...” etc. Unless we get two at the same time, in which case we can name the name “name N1½” etc. Who, me? Worry?

This stinks of work left incomplete and in frightful disarray. It ought to smell ripe for some bright young mind to feed on and grow. I tell you: it is not just a matter of naming — something yet nameless squirms beneath.

algebraic topology

Betti number (of a group)

Here again, they usually don’t bother to capitalize Betti’s name. (Due to their overwhelming respect to the great man, to be sure!)

In the context of finitely generated torsion-free commutative groups, it is the number of copies of the integers whose direct product equals the group. Consider instead:

power of the integers

or

integers power

of the group.

Klein bottle

There are many kinds of

hyperbottle

and I’m not qualified to classify them, or name them. This one is perhaps the simplest non-orientable hyperbottle, and it needs four dimensions to sit without intersecting itself.

abstract algebra

commutative, commutativity

This term is quite descriptive — in French, where the verb commuter means to exchange or substitute. (It is attributed to François Servois, who used it in an 1814 memoir.) But it is a rather long, foreign word, for a very commonly used idea. Consider instead:

swappable, swappability

It immediately suggests the activity in question, in English, at a reduced price of syllables.

A weakness is that it suggests somewhat too much. Another possibility would be switchable, switchability, which sounds a little less silly.

A ‘commutator’ would then be rendered a ‘swapper’ (which pleases me very much). And an ‘anticommutator’ would be an ‘antiswapper’. But a commutator doesn’t really exchange or swap two things, it is the difference of the swapped products. The form of the word is a little misleading. Well, ‘swap difference’ has the same number of syllables as ‘commutator’, and is more accurate.

We commonly say “the elements commute” to mean, if the two elements are exchanged (or commuted), the product remains the same. That is, the product is invariant under the operation of commutation. Using swap, it becomes: “the product is invariant under swapping”. But the usage of ‘commute’ could be brought over too, as “the product swaps”.

associative, associativity

William Rowan Hamilton seems to have coined the term ‘associative property’. It’s a fairly descriptive word for the concept, considering the common uses of the verb ‘associate’. The word ‘associativity’ is rather long, though, for such an important concept, and rather Latin.

Is an improvement possible?

The associative property is intrinsically more complex than commutativity, so it’s less likely that there’s a simple term that suggests it in common language. The current term isn’t bad: the notion is somehow about the common notion of association — but really what is in question is the order of association. And the term is again awfully long, considering the frequency of its use.

Most European languages take the same word. In Greek, the term προσεταιριστική, meaning ‘cooperation’, is used. I guess that’s no worse. In Russian, the term сочетательность, meaning ‘compatibility’, is sometimes used. In Chinese, the term is 结合律, where (in both Chinese and Japanese) 结合 is the verb ‘combine’ or ‘link’. That’s maybe better.

relinkable, relinkability

(If the word ‘group’ hadn’t already been snatched up, I would propose:

regroupable, regroupability.)

distribute, distributive, distributivity

This term too isn’t bad. Once again, the concept is rather complex, this time relating two operations, multiplication and addition. It is credited to François-Joseph Servois (“Essai sur un nouveau mode d’exposition des principes du calcul différentiel”, Ann. Math. Pures Appl. 5 (1814), 93–140.)

In German, this is sometimes known as the Verteilungsgesetz, because German has a proper Germanic word for ‘distribute’.

English does have a cognate to teilen: that is ‘deal’. To get the idea right, we would say ‘deal out’, which is very close in meaning to verteilen and ‘distribute’. Unfortunately, there is no verb ‘outdeal’, which makes application of the word clumsy in this context. Try:

deal, dealable, dealability

Then one would say multiplication “deals with” addition, which kind of works, but is just begging for misunderstanding. It’s a pity, but many common usages of the word ‘deal’ rule it out for this tight technical application.

Well, the ‘ver-’ sometimes works like ‘re-’ in English (as an intensifier). My best offering is:

redeal, redealable, redealability

(In Russian, either the transliteration of the Latin word, or the proper Russian word распределить for ‘distribute’, appear.)

Cayley numbers/algebra

— is one of those doubly unfortunate namings. Cayley did discover the eight-dimensional algebra, but only a short time after Graves had discovered them, and named them octaves. (There is rather more to this story, including an embarrassing slip-up by Hamilton, whereby he credited the wrong person in a talk.)

The best name, which indicates their eight-dimensional nature, but avoids a pointless confusion with the established musical term, and which sounds like their parent algebra, the quaternions, is the one that Cayley himself gave them:

octonions

general linear group

Does being vague help, generally? Does it help to toss in a word that is tangentially related to the referent?

How about:

group of invertible matrices,

the phrase that is usually provided to explain this gem of obfuscation.

special linear group

The group of square matrices of determinant 1 could be called “special”, or, just as well, “wonderful”, or “nice”. Such terms are no better (and no worse) than “special”. But there is already a rather better term:

unimodular group

This term is used in a more general context, so, if the matrix context must be specified:

unimodular linear group .

How to explain the symbol, SL(n), though?

special orthogonal group

The term orthogonal group strongly suggests the connection to the geometrical property, but the modifier special suggests only that we should pay more money for it, or that we should take care to say nothing about it that could be taken as a slight.

This group could be called lots of different things, as it arises in different ways. But ‘special orthogonal’ is, by a fair margin, the most useless of them.

There is a history of calling them ‘proper orthogonal’, which is only incrementally better.

Consider that the requirement that the transformations have determinant +1 has an equivalent geometrical formulation: the transformations preserve the sense of an orthogonal system.

How about instead

sense-preserving orthogonal group?

Or, for short,

sense-orthogonal group?

I know, it’s weak... but it has the advantage that the symbol SO(n) needn’t be changed.

In physics, there is a notion of a ‘parity’ operator, which corresponds to orthogonal operators of negative determinant. Perhaps a nice word can be constructed based on that.

parity-preserving orthogonal group?

Maybe better would be to view it as a subgroup of a geometrical group.

unit conformal group?

In 2 and 3 dimensions, it is isomorphic to the group of rotations. For those applications:

rotation group.

However, this group becomes rather complicated in 4 dimensions, to the point where the term ‘rotation’ is just misleading (and is the cause of many misapprehensions and mistakes).

In the plane, it is called the circle group and in 3-space it is called the sphere group, so why not

n-sphere group.

It is also homomorphic to the group of square unitary matrices

unitary group,

but this term is already technical jargon.

I’m going to go out on a limb here. To my mind, the very things that we expect of transformations that preserve solid figures are: that the angles of figures aren’t changed, lengths aren’t changed, and the sense of a traversal of vertices isn’t changed. How about:

solid group.

Then you could tell your students that the “SO” in SO(n) stands for “SOlid group”, and avoid breaking the news that mathematicians sometimes drop the ball.

Jacobi identity

— is a sort of weakening of the commutativity property or of the associative property of multiplication, in the presence of addition. Maybe something like:

half-link identity

It is also a statement about the sum of three commutators. That suggests:

triple-commutator identity

It is also a statement about the relation of products of three shift permutations of three elements, thus:

triple-shift identity

group

— is completely non-suggestive of an algebraic structure, although it has all the provenance one could hope for.

Galois used the term groupe in his final writings, but never defined the term, as though he were using it in a colloquial sense.

It would be OK with some sort of qualification, say, an ‘operation group’. On the other hand, taken in context, it does have the advantage of being monosyllabic, and suggesting an association, with almost no other connotations.

Cauchy used the term “conjugate system of substitutions”, which is worse in every way.

“The first person to try to give an abstract definition of a group was Cayley.” (1878)

“… from 1863 when Jordan wrote a commentary on Galois’ work in which he used ‘group’, it became the standard term.”

See: The abstract group concept by J. J. O’Connor and E. F. Robertson.

Maybe algebraic structures are hard to name in suggestive ways, because daily experience doesn’t involve them. I have no better term for groups, except to recommend that, outside of a group theory context, they be called ‘algebraic groups’.

semigroup

A semigroup is not a “half group” or “partial group”, as its name suggests. It is backwards, anyway, to define a mathematical concept by removing a feature of another concept. (E.g. removing the features of identity and inverses from the concept of group.)

But what to call it? As I pointed out, the term “group” seems to have arisen from its colloquial use, and has the advantage of being free from other connotations. Perhaps another monosyllable:

bunch.

magma

This term is excellent, as it suggests the formlessness of non-associativity. Its colloquial sense isn’t confusable with anything mathematical.

There was a controversy about the name of this structure. For a while, it was called a “groupoid”, which has the same problems as “semigroup”. Furthermore, that term had come into use in a different way in category theory.

quasigroup

A quasigroup is not a “seeming-group”. This has failings similar to those of “semigroup”.

But what to call a non-associative algebra with a division operation?

Maybe, following the term “magma”:

scree?

ring

The abstract algebra use was coined by Hilbert, as ‘Zahlring’, meaning ‘number ring’, with ‘ring’ supposedly in the sense of a generic association, as a spy ring.

Well, from the beginning, I imagined a finger ornament, or something of that shape. And then there are rings of integers modulo n, which do somehow come back on themselves.

This use of ‘ring’ for a generic association kind of works in English. In French, Spanish, Italian, and Portuguese, the term has been translated as anneau, anillo, anello, anel (respectively), which don’t suggest an association at all, but only the object or its shape. Likewise the Russian and Greek translations кольцо and δακτύλιος.

Again, I’m at a loss for a better word. As with ‘group’, if your audience includes people not studying rings and fields, say “algebraic ring”.

ideal

Oh, good, another abstract noun that looks exactly like a commonly-used adjective. As a name for a useful class of things, it is less than ideal. I wish somebody would do something about it. I personally have no idea.

It started as just an adjective in the term ‘ideal number’, a name given by Kummer around 1847. Even there, the adjective is rather dodgy, but there is a sense to it. He was evoking Plato’s notion of ‘ideal’ as a true form, somehow the essence of a thing, which Plato regarded as being somehow realer than mundane reality. The term was later generalized for algebraic rings by Dedekind, who coined the noun use of the word in this context.

The mathematical term has the sense of a subset that somehow captures an essence of the whole set.

field

The abstract algebraic use is at odds with the vector calculus use of a ‘vector field’. For me, the vector calculus use evokes a field of grass, suggesting little vectors. How the abstract algebraic concept relates to the word escapes me.

Dedekind introduced the abstract algebra concept with the term ‘Körper’. I don’t know why this didn’t make it into English (maybe English bashfulness about the body). The term ‘field’ was introduced in the English literature by Moore (1893). (In French it is still called ‘corps’, although that term is giving way to the loan-translation ‘champ’.)

Provided the word is qualified, perhaps with ‘abstract’, or used in the context of abstract algebra, the ambiguity isn’t important.

Besides the conflict, it isn’t suggestive of anything whatever.

A ‘field’, in the algebraic sense, is a generalization of familiar arithmetic. How about:

(an) arithmetic

Abelian group

They usually don’t even bother to capitalize poor Abel’s name. I have known students who weren’t aware that it was a guy’s name, let alone knowledgeable at all about the guy. If you were to tell them that an ‘abelian’ group means a group that isn’t ‘belian’ (surely a thing to avoid!), they would dutifully teach that to their own students when they ascend to high-paid professorships.

The reason for the state of affairs is purely historical. Abel assumed commutativity in groups he studied, before the term commutativity was invented (by Servois, 1814. See the Wikipedia article on the commutative property.)

Of course this is commonly referred to by the explicit term:

commutative group.

normal subgroup

applauds the subgroup for not being abnormal. The alternative term

invariant subgroup

is commonly used, and comes closer to saying something useful about the subgroup... but it leaves open the question of how the subgroup is invariant.

Another term,

self-conjugate subgroup

might be useful for those who know what “conjugate” means in the context.

I’m not satisfied with any of these.

If it’s not wrong to borrow the term “similarity transformation” from geometry, maybe:

similarity-invariant subgroup

Lagrange’s theorem

The primary theorem about the relation of the order of a finite group and those of its subgroups: the latter divide the former. It’s so simple, a good name should almost tell the story.

subgroup order division theorem

Outside of a finite group context, simply add the qualification (finite group).
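
The story is easy to check by machine in a small case. A sketch with the cyclic group Z_12 under addition mod 12 (the function name is mine): every cyclic subgroup’s order divides 12.

```python
def cyclic_subgroup(n, k):
    """Elements of the subgroup of Z_n (addition mod n) generated by k."""
    elems, x = {0}, k % n
    while x != 0:
        elems.add(x)
        x = (x + k) % n
    return elems

# subgroup order divides group order (here, 12) for every generator
for k in range(12):
    assert 12 % len(cyclic_subgroup(12, k)) == 0
```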

Cauchy’s theorem

The flip-side of the order division theorem (every prime dividing the order of a finite group is the order of some element) should reflect that it is the flip-side:

(finite group) order factor theorem

Cayley’s theorem

This absolutely central theorem of group theory (every group is isomorphic to a group of permutations) deserves an explicit name.

(group) permutation theorem

where, in a group theory context, just drop the (group).

(Besides, I always struggled with the problem of how many y’s appear in Cayley’s name.)

Jordan-Hölder theorem

composition series theorem

fundamental theorem of finitely generated Abelian groups

The ‘fundamental’ qualification is very questionable — it really isn’t a qualification at all, it’s an exhortation. It’s like saying “very important”. Like putting stars and pointing fingers and flashy lights around it. It’s a little silly. With little more effort, the term could describe the thing.

decomposition theorem of finitely generated commutative groups

Now the attentive student will know not only that the theorem is important, but why it is important.

In a context of finitely generated commutative groups, drop all that, of course. And in a context of finitely generated groups, just drop all but the ‘commutative’. And so on.

Sylow theorems

This family of theorems regards the decomposition of general (non-commutative) finite groups.

It is a very short name for generalizations of the theorem that goes by the very long but questionable name “fundamental theorem of finitely generated Abelian groups”. Rather than having to explain what the term refers to after introducing the name, it would be better to have a more explicit name. My best offering is:

(finite group) decomposition theorems

To distinguish these from the theorem about commutative groups, just affix an adjective ‘non-commutative’.

Boolean algebra/logic

binary algebra/logic

logic

De Morgan’s laws

As though a guy invented such a basic rule of discourse. But what would you call them?

I have been caught calling them the

rules of and-or-not

Of course, the question of top billing is a problem. Take your pick, I guess.

Gödel’s incompleteness theorem

I don’t know of any other incompleteness theorems (although two separate versions of the theorem in Gödel’s papers go by this name). Why not just

the incompleteness theorem

Zorn’s lemma (or Kuratowski-Zorn lemma)

Zorn made a classic mistake, tacitly assuming the truth of a subtle principle, which hadn’t formerly been recognized as such. And he wasn’t the first to do it for this particular statement.

The principle is now well named:

axiom of choice

(It was Zorn himself who first proposed that the lemma he thought he had proved was in fact equivalent to the axiom of choice.) This form of the axiom regards the existence of a maximal element in a partially ordered set whose chains all have upper bounds, so within that context, it can be called simply

maximal element principle,

and outside, for context, append “of chains in partially ordered sets”.

set theory

Venn diagram

— called a ‘Mengendiagramm’ in German; in French, ‘diagramme logique’. But in Italian, it’s also called ‘diagramma di Euler-Venn’. (Good grief. To how many great, great men must we do homage, in expressing such a simple thing?)

What is wrong with

set diagram

Is it confusable with something else important?

Cartesian product

Descartes discussed an analysis of the plane into ordered pairs of numbers, thus analytic geometry. To go the other way, to construct the geometric plane from ordered pairs of numbers, is a synthesis. A better term for the construction of a set from two factor sets, as ordered pairs of elements from the factor sets, would be

synthetic product.

Cantor diagonalization argument

— is often just called a

diagonalization argument

or even just

diagonal argument

But ‘diagonal’ is a pretty common word in math, used in several ways already. If there is danger of any confusion, one could insert the subject of the argument, perhaps something like

cardinality diagonal argument
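
The mechanics of the argument fit in a few lines. Given any list purporting to enumerate binary sequences, flip the diagonal to produce one the list missed (a finite sketch of the infinite argument; names mine):

```python
def diagonal_escape(rows):
    """Given n rows of 0/1 digits, return a sequence that differs
    from row i in digit i -- so it appears nowhere in the list."""
    return [1 - rows[i][i] for i in range(len(rows))]

rows = [[0, 0, 0],
        [1, 1, 1],
        [0, 1, 0]]
missing = diagonal_escape(rows)  # [1, 0, 1]
```

The constructed sequence disagrees with every row at at least one digit, so no enumeration can be complete.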

countable set

— not a terribly bad term, but it has weaknesses. Different authors use the term differently: some use it to exclude finite sets, others don’t. Worse, at first sight, it suggests that a person could count the members of the set. In common conversation, it could mean “a set whose members could be practically listed,” thereby excluding infinite sets.

denumerable set

has the advantages that it suggests a relation to numbers, and that it wouldn’t arise in normal conversation to mean nearly the opposite of what is intended. But it’s five syllables as opposed to three.

Zermelo-Fraenkel set theory

When set theory was being developed, there were some competing notions and controversies. But the basic question of a useful theory of sets for mathematics has long since settled down. The exact axioms are rather a matter of consensus, and this one set has obtained wide acceptance… though it is wrong to think it is the only possible set of axioms.

The ideas are so basic, I can’t imagine a simple descriptive term suggestive of their nature.

So an adequate term is the one already in common use:

standard set theory

The term ‘standard’ should always refer to a fixed, open standard, as opposed to a consensus opinion. So an international standard ought to be published to state clearly what ‘standard set theory’ means.

probability

Bayes’ theorem / rule / law

conditional probability theorem / rule / law
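
Whatever it’s called, the rule is just P(A|B) = P(B|A)·P(A)/P(B). A sketch with made-up numbers (a 1% base rate, a test with 95% sensitivity and a 10% false-positive rate; all the figures are hypothetical):

```python
# Bayes' rule with hypothetical numbers: P(disease | positive test).
p_a = 0.01                    # P(A): base rate
p_b_given_a = 0.95            # P(B|A): sensitivity
p_b_given_not_a = 0.10        # P(B|not A): false-positive rate

# total probability of a positive test
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# conditional probability of disease given a positive test
p_a_given_b = p_b_given_a * p_a / p_b   # about 0.0876
```

The suggestive name would also remind students why the answer is so much smaller than the sensitivity.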

Markov chain / property

Another, more suggestive name is often used.

memoryless chain / property

functional analysis

This common name for the field has several weaknesses. One is the strange use of the adjective functional. See below. Another is that the primary focus of the field isn’t really functionals at all. Yet another is that, although the major applications of the theory are in various branches of analysis (and differential equations), the modern theory is essentially a merger of linear algebra and topology, not an analysis in the sense of analytical geometry.

Better terms for the field include:

infinite-dimensional linear algebra,

linear operator theory.

functional

The word looks and sounds like nothing but an adjective in common use… yet here it is used as a noun. It seems out of place and weird.

Near as I can tell, Hadamard coined the noun form fonctionnelle in his 1910 book Leçons sur le calcul des variations. In French, the word is a normal adjective for a feminine object, but it also happens to be of the form of a diminutive noun, and he uses it thus. The English transliteration from the French loses the diminutive impression, and at first sight looks like a grammatical mistake.

If it were only less anglicized, it wouldn’t look so much like an adjective. Consider:

functionelle

which has the clear connection to ‘function’, doesn’t look like an adjective, looks like other French noun loan-words, and has a diminutive flavor.

Yeah. That’s what went awry. The translators over-anglicized it. Oops! (Or maybe they wanted it to sound more beefy and ignorant. Could be — it was Americans what done it.)

Riesz representation theorem

In the context of continuous functions,

integral representation theorem

generalized function

The concept of function was repeatedly generalized from its inception, for 300 years. Which generalization does this term refer to? Generally, ‘generalized’ is a miserably vague adjective for a concept. (It ‘vagueizes’ the concept.)

When it appears, the term evidently refers to the functional-analytic notion which was named by Laurent Schwartz

distribution.

graph theory

Hamiltonian path / cycle

Hamilton’s name is hung on this because he was a famous guy who happened to study a single special case of such a path. He was not the first.

There are already more descriptive terms, with fewer syllables:

simple spanning path / cycle

traceable path / cycle

Perhaps overly literal:

path / cycle through each vertex

For brevity, a common word suffices for such a cycle:

tour
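
Whatever the name, finding such a cycle is famously hard, but checking for one by brute force is easy to write. A sketch (function name mine) that tests whether a small undirected graph has a tour:

```python
from itertools import permutations

def has_tour(n, edges):
    """Brute-force test: does the undirected graph on vertices
    0..n-1 have a cycle visiting every vertex exactly once?"""
    adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    return any(
        all((p[i], p[(i + 1) % n]) in adj for i in range(n))
        for p in permutations(range(n))
    )

# the 4-cycle has a tour; the 4-vertex path does not
assert has_tour(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
assert not has_tour(4, [(0, 1), (1, 2), (2, 3)])
```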

Eulerian path / cycle

Each edge of the graph is traversed just once. Maybe:

traversal / closed traversal

numerical analysis, optimization theory

Nelder-Mead method

I had never heard it called that until recently when I looked for it in Wikipedia, and found that they had applied their perverse naming scheme to this, too. They list there several other, much better, names for the same method.

I had always heard it called

downhill simplex method

This suggests nicely the way the method works, but suffers from confusion with the ‘simplex method’ of linear programming.

The article lists two other good names for the same thing:

polytope method, and

amoeba method.

If you look at an animation of this algorithm in progress, surely you’ll agree: the last name is the right one.
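
The moves of the method (reflect, expand, contract, shrink) fit in a short sketch. This bare-bones version uses the standard coefficients and a fixed iteration count; it is meant to show the amoeba's motions, not for serious use:

```python
def amoeba(f, simplex, iters=200):
    """Minimize f by the downhill simplex ('amoeba') method.
    simplex: list of n+1 points, each a list of n coordinates."""
    pts = [list(p) for p in simplex]
    for _ in range(iters):
        pts.sort(key=f)                          # best first, worst last
        best, worst = pts[0], pts[-1]
        n = len(best)
        # centroid of every point except the worst
        c = [sum(p[i] for p in pts[:-1]) / n for i in range(n)]
        refl = [c[i] + (c[i] - worst[i]) for i in range(n)]
        if f(refl) < f(best):                    # great direction: expand
            exp = [c[i] + 2 * (c[i] - worst[i]) for i in range(n)]
            pts[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(pts[-2]):               # decent: just reflect
            pts[-1] = refl
        else:                                    # poor: contract inward
            contr = [c[i] + 0.5 * (worst[i] - c[i]) for i in range(n)]
            if f(contr) < f(worst):
                pts[-1] = contr
            else:                                # shrink toward the best point
                pts = [best] + [[(p[i] + best[i]) / 2 for i in range(n)]
                                for p in pts[1:]]
    return min(pts, key=f)
```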

Toeplitz matrix

These arise from the discretization of convolution operators. How about

convolution matrix.
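
The connection is easy to exhibit: the matrix of a discrete ("full") convolution is constant along its diagonals, which is exactly the Toeplitz property. A small sketch:

```python
def convolution_matrix(k, n):
    """Matrix T with T @ x equal to the full discrete convolution k * x,
    for x of length n.  T has constant diagonals, T[i][j] = k[i-j]:
    the Toeplitz property."""
    rows = len(k) + n - 1
    return [[k[i - j] if 0 <= i - j < len(k) else 0 for j in range(n)]
            for i in range(rows)]
```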

REFERENCES

MacTutor History of Mathematics Archive
University of St. Andrews School of Mathematics and Statistics

Steven Schwartzman,
An Etymological Dictionary of Mathematical Terms Used in English,
The Mathematical Association of America (1996) ISBN 0883855119

Anthony Lo Bello,
A Comprehensive Dictionary of Latin, Greek, and Arabic Roots,
Johns Hopkins University Press (2013) ISBN 9781421410999

more to work on

Use of "unit" in field/ring theory (for an invertible element), vs its use 
everywhere else in math, for the number 1.
Sometimes the latter is called "unity". 
(unit circle, unit sphere, unit vector, unit interval, unit set... etc. )
(note: in complex analysis, i is the "imaginary unit", and there are
"roots of unity")
How did this happen?
Ah.  In German, the field/ring concept is called "Einheit".  Maybe the 
concepts were confused in translation.
Dictionary.com says of "unit": 1570; coined by John Dee as a translation
of Greek mónas (previously rendered as unity); perhaps influenced by digit.

Menelaus’ theorem
Pappus’ (centroid) theorem, Guldinus theorem, Pappus-Guldinus theorem
Pappus’ (hexagon) theorem
Pappus’ (area) theorem ( a generalization of Pythagorean theorem)
Hero’s formula

sieve of Eratosthenes
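
Here, at least, the name suggests the method: sift out the multiples of each prime as it turns up. A minimal sketch:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: cross off multiples of each prime."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, n + 1, p):   # p*p: smaller multiples done
                is_prime[m] = False
    return [p for p, b in enumerate(is_prime) if b]
```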

Riemannian curvature tensor / Riemann-Christoffel tensor
Bianchi identities

beta function
zeta function (integer-powers-sum?  prime-powers-product?)
There’s also a ‘Dirichlet’s beta’

Möbius function	(number theory)
Hankel functions
Mathieu function
Lamé functions
Airy functions
Struve function
Weierstrass function
Euler \phi-function (given a counting number, the number of counting numbers
	at most that number, and relatively prime to it) 
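
The parenthetical description translates directly into code; a naive sketch (a real computation would factor n instead):

```python
from math import gcd

def phi(n):
    """Euler's phi: count the k with 1 <= k <= n and gcd(k, n) == 1."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)
```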


Painlevé property
Painlevé transcendents

Bernoulli trial
standard deviation -- means something function-theoretically
	square root of the variance

Poisson distribution
	number of equally probable, independent events in an interval
Bernoulli distribution
	coin toss distribution
chi-squared distribution
	inference ?
Student’s t-distribution ("Student" was really Gosset, who called it
	"frequency distribution of standard deviations of samples drawn from a normal population")
gamma distribution
	maximum entropy ?

Bell polynomials
Hermite polynomials
	Hermite did not invent these, nor was he the first to publish
	extensively on them.  (Those are Laplace and Chebyshev.)
	They are orthogonal wrt the normal distribution.
Jacobi polynomial
Kirchhoff polynomial
Laguerre polynomials
	orthonormal on half-line wrt. exp(−x)
Chebyshev polynomials (also there are ‘discrete’ ones)
	2 different kinds.  either is a basis.
	he published works using the polys.
	Various useful properties might suffice for a name.  Which?
Legendre polynomial
Schur polynomials

Bernoulli Numbers
	1) they popped up in other people’s discussions around the same
	time, some had been tabulated before.
	2) I don’t know a really nice description of them.
	3) They are related to several other sequences.
	Maybe it would be better to start with the most perspicuous sequence,
	and write these (and other derivative sequences) as variants.
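
One clean way to generate them: the recurrence sum_{j=0}^{m} C(m+1, j) B_j = 0 for m >= 1, starting from B_0 = 1 (this yields the convention B_1 = -1/2). A sketch with exact rationals:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """First n+1 Bernoulli numbers (convention B_1 = -1/2), from the
    recurrence sum_{j=0}^{m} C(m+1, j) B_j = 0 for m >= 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-s / (m + 1))
    return B
```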

Mittag-Leffler’s theorem

De Moivre-Laplace theorem
	special case of the central limit theorem

Jordan canonical form (Jordan normal form) / block
	(a block upper-triangular mx similar to the given matrix)
	(upper-triangular eigenspace form)
Jordan decomposition (is used to mean several rather different things)
Frobenius normal form (aka rational canonical form)
	(a block diagonal of Frobenius matrices, similar to given matrix)
	see https://encyclopediaofmath.org/wiki/Frobenius_matrix
	(eigencoefficients form or decomposition or canonical form)

Smith normal form
	(pivoting form)

normal matrix (exhortation.  what do they preserve?)
positive definite matrix / operator (what does the "definite" impart?)

Frobenius matrix (aka companion matrix)
	(one off-diagonal all 1, one row or column coeffs of char. poly)
	(aka, transposed: a Gauss transformation) (row elimination matrix?)
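
The parenthetical description translates directly: ones on one off-diagonal, the characteristic polynomial's coefficients in one column. A sketch, using the last-column convention (one of several in use):

```python
def companion(coeffs):
    """Companion (Frobenius) matrix of the monic polynomial
    x^n + coeffs[0] x^(n-1) + ... + coeffs[-1]:
    ones on the subdiagonal, negated coefficients up the last column."""
    n = len(coeffs)
    M = [[0] * n for _ in range(n)]
    for i in range(1, n):
        M[i][i - 1] = 1
    for i in range(n):
        M[i][n - 1] = -coeffs[n - 1 - i]
    return M
```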

Hadamard matrix (first constructed by Sylvester; cmp Walsh matrix)
	(entries are ±1, rows are orth.)
	(used in combinatorics, coding)
Hadamard conjecture: Hadamard mx of order 4k exists for each pos. int. k .
Walsh matrix (special case of Hadamard mx)
	(order 2^n, entries are ±1, rows are orth.)
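
Sylvester's doubling construction produces the order-2^n cases directly; a sketch, with the row orthogonality easy to check:

```python
def sylvester_hadamard(n):
    """Hadamard matrix of order 2**n by Sylvester's doubling:
    H_{2m} = [[H, H], [H, -H]].  Entries +-1, rows mutually orthogonal."""
    H = [[1]]
    for _ in range(n):
        H = [row + row for row in H] + \
            [row + [-x for x in row] for row in H]
    return H
```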
	

Sylvester matrix
	(entries are permuted coeffs of two polynomials,
	used to test if the polys have a factor in common)
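
The common-factor test goes through the determinant of this matrix (the resultant), which vanishes exactly when the two polynomials share a root. A sketch, with a naive cofactor determinant that is fine at these sizes:

```python
def sylvester(p, q):
    """Sylvester matrix of polynomials p, q given as coefficient lists,
    highest degree first: n shifted copies of p over m shifted copies
    of q, where m = deg p, n = deg q."""
    m, n = len(p) - 1, len(q) - 1
    size = m + n
    return ([[0] * i + p + [0] * (size - m - 1 - i) for i in range(n)] +
            [[0] * i + q + [0] * (size - n - 1 - i) for i in range(m)])

def det(M):
    """Determinant by cofactor expansion (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))
```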
	
Stieltjes matrix
Cauchy matrix

Hankel matrix (aka catalecticant matrix)
	(antidiagonals are all constant; contrast with Toeplitz matrix)

Hessenberg matrix (or form, "upper Hessenberg form")
	(triangular except for one off-diagonal)

stable matrix
	(every eig has strictly negative real part)
	linear differential equations
Hurwitz matrix
	(matrix composed of certain permutations of coeffs of polys)
	related to stable matrix.

Mercer’s theorem (any Hermitian mx is pos semidef iff it is the mx of inner products of a set of vectors.  "vector realization")

multiple matrix terms:
A^T, etc
	conjugate (complex numbers)
	transpose (real only)
	adjoint (complex matrices)
	Hermitian adjoint
	Hermitian conjugate

A^T A = I
	orthogonal (matrix)
	unitary

A^T = A
	symmetric 
	Hermitian (complex case)
	self-adjoint (not the same def in infinite-dim spaces)

----------------------------
Noether’s theorem (one in algebra, one in differential equations)

Sobolev space
Hölder space

Banach fixed-point theorem
Brouwer fixed-point theorem
Borsuk fixed-point theorem
Lefschetz fixed-point theorem

Čech homology
Stone-Čech compactification

Riemann sphere, surface
Cauchy principal value
Faà di Bruno’s formula
Fresnel integral
Liouville’s theorem (in differential algebra)
Liouville’s theorem (conformal mappings in Rn)
Liouville-Arnold theorem
Liouville’s differential equation
Hadamard’s formula, theorem
Hadamard’s three-circle theorem
Harnack’s inequality
Cauchy’s inequality (about power series)
Lagrange’s identity (at least three different things are called this)
Lagrange’s theorem (one in group theory, another in number theory, and others)
Jensen’s formula
Lucas’ theorem
Picard’s (little/great) theorem
Poisson formula
Rouché’s theorem
Schwarz’ lemma
Schwarz-Christoffel formula
Schwarz triangle function
Stirling’s formula
Weierstrass’ (factorization) theorem
Gaussian integral aka Euler–Poisson integral (idea by de Moivre)
Lyapunov surface

Mandelbrot set
Julia set

Abel’s limit theorem
Abel’s power series theorem
Abel’s differential equation (or identity/formula)
Abel’s theorem about irreducible polynomials (Abel-Ruffini)

Goursat's lemma (poor man only had one lemma!?)
	one is about subgroups of the direct product of two groups. 

Galois is a particularly sad victim of this sycophantic drooling.
https://en.wikipedia.org/wiki/List_of_things_named_after_%C3%89variste_Galois

Montel’s theorem (really two of them)

Lebesgue measure / integral
Haar measure
Baire measure
Carathéodory outer measure
Hausdorff measure
sigma-algebra, σ-algebra
	(the distinction from a set algebra is closure under countable
	unions and intersections.  Why not "measure algebra"?)


Baire category theorem
Borel equivalence
Borel field
Borel set
Fatou’s lemma
Fubini’s theorem
Hahn-Banach theorem
Hölder inequality (about prod of Lp norms.  First published by Rogers)
Jensen inequality (about error ftn of secant line)
Lindelöf theorem
Radon-Nikodym theorem

Daniell integral

Lipschitz continuity

Hahn decomposition theorem

Liouville’s formula (Abel-Jacobi-Liouville identity)

Laplacian (nabla, divergence of gradient, diffusion)

Dirichlet boundary conditions / problem
	function value boundary condition
	aka "fixed boundary condition", "boundary condition of the first type"
Neumann boundary conditions (or problem)
	function normal boundary condition
Robin boundary condition
	normal derivs are linearly combined with ftn values
	aka "Fourier-type" or "radiation" condition
Mixed boundary condition
	normal derivs are prescribed independently of ftn vals
Cauchy boundary conditions (or Cauchy problem)
	initial-value data
	ftn and derivs wrt t prescribed at 0
	Sturm-Liouville problem
		self-adjoint?

Cauchy-Euler (Euler-Cauchy, or Euler’s) equation

Radon problem, transformation
Hadamard’s method of descent

1st-order systems of differential equations
	Navier-Stokes equations
	Harnack’s inequality
	Holmgren’s theorem
	Huygens’ principle

2nd-order linear ODEs
	Bessel equation
		(introduced by D. Bernoulli, generalized by Bessel)
	
	Legendre equation
	Legendre polynomials via D.E.

	Beltrami equation
	Tricomi equation
	Darboux equation

	Dirichlet principle
	Duhamel’s principle

	Gårding’s inequality (also strong inequality)
	Gårding’s hyperbolicity condition

Korteweg - de Vries equation

Bernoulli differential equation 
	Jacob Bernoulli, proposed as problem December 1695,
	Acta Eruditorum,
	Solved by Leibniz, Acta Eruditorum (16)
	Euler wrote several papers (1752-3, -5) settling the current form.
	viscosity-free incompressible fluid, energy-pressure equation?

Hopf bifurcation

Cauchy-Kovalevsky (or Cauchy-Kovalevskaya) theorem

Lipschitz integers
Hurwitz integers

Riccati theorem
Poisson brackets
Pfaffian form

Frobenius’ theorem: there are at least 2
	(existence soln involutive systems;
	finite-dimensional real associative division alg is R C or H )
Hurwitz’s theorem:
	{finite-dimensional real normed division alg is R C H, or O.)
Wedderburn's little theorem
	(every finite domain is a field. statement 1 syllable longer than name) 

domain: different uses in algebra and function theory

Hamel basis
Schauder basis

Riemann invariant
Darboux frame
Frenet frame
Gauss-Bonnet formula
Poincaré formula

Dirichlet operator
Fredholm operator

Bessel’s inequality (see Parseval)
Riesz’ lemma

Brooks’ theorem
Bollobás-Erdős theorem
Dirac’s theorem
Menger’s theorem
Petersen’s theorem
Ramsey theory/theorems
Fano plane
Kirchhoff’s theorem
Turán’s theorem
Turán problem/polynomial/graph

Gaussian quadrature

Galerkin method
Godunov method
Crank-Nicolson method
Lanczos method
Rayleigh-Ritz method

Crout decomposition (of mx)
Doolittle decomposition
	(Crout and Doolittle differ in whether the upper or the lower triangular factor has unit diagonal)
Cholesky decomposition (of Hermitian, p-d mx)
Householder transformation / reduction
Krylov sequence
Lanczos tridiagonalization
Schur form / reduction / decomposition  (also Real Schur decomposition)
Durbin’s algorithm
Givens rotation / orthonormalization
Gauss-Seidel iteration (Liebmann method aka method of successive displacement)
Jacobi method
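
Of these, Gauss-Seidel's other name, "method of successive displacement", is at least descriptive: sweep through the unknowns, re-solving each one immediately from the freshest values. A sketch, for systems where the sweep converges (e.g. strictly diagonally dominant):

```python
def gauss_seidel(A, b, iters=50):
    """Solve A x = b by successive displacement: update each unknown
    in place from the latest values of the others.  Converges e.g.
    when A is strictly diagonally dominant."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```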

Minkowski inequality
Neumann series (geometric series of matrices)
Gerschgorin theorem
Rayleigh quotient

complex / simplex 
simplex: history
	used in ‘downhill simplex method’ and the ‘simplex method’
	of linear programming, but doesn’t refer to the same thing

complex: also called simplicial complex
	dunno.  The usual noun use is suggestive of the math.

fundamental group (of complex plane)
	loop group?  (but there is already a "loop group", the group of
	maps from the circle into a Lie group, so yes, they are different)
	loop homotopy group?
---------------------------
Euler’s formula (for vertices, edges, faces of polyhedra)
Euler circuit
Euler trail
---------------------------
Different names for same thing:

antisymmetric, skew, skew-symmetric, and alternating matrix.
	(1945 paper by Murnaghan for latter)

In the mid-1800s (before matrix algebra) the distinction between the terms
'determinant' and 'matrix' was seldom made.

---------------------------

Gauss’s Law
	(divergence theorem applied to electric charge.
	formulated by Lagrange 40 years earlier)

Bézier curve
	at least part of the idea is due to Paul de Casteljau.
	It’s a kind of cubic spline.  What sets it apart?

Burnside’s formula/lemma
	(a case where the person the thing is named after objected to it...
	explaining someone else (Frobenius, here) is really the originator.)

	The lemma is about finite group actions: for a finite group G acting
	on a finite set X, the number of orbits times the order of G equals
	the sum over g in G of the sizes of the sets X_g = { x in X | g.x = x }.
	(What is X_g called?  The set of fixed points of g.)
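
A worked check of the lemma: count colorings of a cycle of beads under rotation, both directly and by averaging fixed points.

```python
from itertools import product

def orbit_count_direct(n_beads, n_colors):
    """Count orbits directly: keep one canonical representative
    (the lexicographically least rotation) per orbit."""
    colorings = product(range(n_colors), repeat=n_beads)
    return len({min(c[k:] + c[:k] for k in range(n_beads))
                for c in colorings})

def orbit_count_burnside(n_beads, n_colors):
    """Burnside / Cauchy-Frobenius count: the number of orbits is the
    average, over group elements (here rotations), of the number of
    colorings each one fixes."""
    colorings = list(product(range(n_colors), repeat=n_beads))
    fixed = [sum(1 for c in colorings if c[k:] + c[:k] == c)
             for k in range(n_beads)]
    return sum(fixed) // n_beads
```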

Clifford algebra
	in his "Applications of Grassmann's Extensive Algebra", Clifford
	speaks of "algebras of n units". 
	Is this different from "Clifford algebras"?
	Clifford also refers to "geometric algebras".
	But this is misleading, too...
	These are associative algebras generated by copies of the reals and
	a finite-dimensional real quadratic space.
	real quadratic space: finite-dim linear space with a quadratic form.
	e.g. dot product, inner product, or bilinear form
	(OK why are the octonions not an example?  Because they are not associative.)
--------------------------
Hamiltonian group

normal subgroup
	aka invariant subgroup, self-conjugate subgroup
	(is there any connection to perpendicularity?)
subnormal subgroup
quasinormal subgroup
pronormal subgroup
paranormal subgroup
abnormal subgroup
malnormal subgroup

direct product
	for commutative groups, is called direct sum
	compare Cartesian product
Cartesian product
	ordered pair, first taken from one set, second from another.
	How about synthetic product... or just synthesis.

--------------------------
K-theory
	I was told once that I was intellectually incapable of understanding
	this.  And, yikes --- they may be right.  But it's still a stupid name.

q-calculus, h-calculus
	unsuggestive names -- I suspect these are just notational conveniences.
--------------------------

pseudometric
pseudovector (or axial vector) vs true vector (or polar vector)

Lorentz transformation / boost
	relativistic transformation / boost
	(but there is special relativity and Galilean relativity...)
	orthogonal transformation --- because it preserves space-time orthog.
	(C. Lanczos: The Variational Principles of Mechanics
	"The transformation ... is a special case of a wider group of
	transformations which has received (with little historical
	justification) the name ‘Lorentz transformations’." )
Galilean transformation
	affine transformation
special relativity
	flat relativity
general relativity
	gravitational relativity
Minkowski space
	relativistic space
Lorentzian manifold
	metric manifold?