Hacker News

My first degree was in math. I've frequently discovered that I don't understand the notation in math papers. What does the paper mean by the harpoon (↼, an arrow with half its arrowhead), and how is it different from the waved arrow (⬿, an arrow with a wavy shaft)? Each discipline has its own conventions: is the backslash a set-difference operator, or is it matrix division? Is alpha, α, an angle in some trig equation, or is it the minimum assured score value in alpha-beta pruning? It takes a bit of time to dive into a math paper. Even the universally understood symbol for integration (∫) can mean different things. Is it Riemann integration or Lebesgue integration? Is it symbolic or a numerical approximation? It depends upon context, and the context is communicated between mathematicians by the subject of the course, the preceding results in the paper, or just a few asides by a professor giving a lecture.
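To illustrate that last ambiguity with a sketch of my own (not from any particular paper): the same ∫ can stand for an exact symbolic value or a numerical approximation, and the two are not the same number.

```python
# A numerical approximation of the integral of x^2 over [0, 1],
# using a Riemann (midpoint) sum. The exact, symbolic value is 1/3;
# the sum only approaches it as n grows.
n = 100_000
approx = sum(((i + 0.5) / n) ** 2 for i in range(n)) / n
print(approx)  # close to, but not exactly, 1/3
```

Nothing in the expression "∫₀¹ x² dx" tells a reader, or a compiler, which of the two is meant.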

Computer scientists (I've been one for roughly 50 years) introduce their own academic notations. Is circled plus, ⊕, a boolean exclusive-or, or is it a bitwise exclusive-or? Take a look at Knuth Vol 4A: it's chock full of mathematical notations embedded in algorithms. He uses superscripts that are themselves superscripts; how are those supposed to be entered with our text editors? What about sequence notations like 1, 4, 9, 16, ...? We might suppose that it is just the integer squares, but the On-Line Encyclopedia of Integer Sequences lists 187 other possibilities. Is the compiler supposed to guess what this is?
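As a concrete sketch of my own (not Knuth's notation), the two readings of ⊕ give genuinely different operations on the same operands:

```python
# Two readings of the circled-plus symbol on the same operands.
a, b = 0b1100, 0b1010

bitwise_xor = a ^ b               # bitwise: combines the individual bits
boolean_xor = bool(a) != bool(b)  # boolean: treats each operand as one truth value

print(bin(bitwise_xor), boolean_xor)  # the two results disagree here
```

Bitwise ⊕ of these operands is 0b0110, while boolean ⊕ is false (both operands are truthy), so a reader of the symbol has to know which meaning the author intended.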

Well, if mathematicians use these concise notations, why shouldn't programmers? I believe it is because mathematicians don't want or need to take the time and space to spell out these operators, variables, and functions in their papers. It's not necessary for mathematicians: other specialists in their field can figure out what the symbols mean while reading their papers, and their students can recognize that a blackboard capital F (𝔽) is likely a field in a class on abstract algebra.

Programmers are doing something different. Their programs are going to be a lot longer than most math papers or lecture expositions. The programs have to deal with data, networks, business rules, hardware limits, etc. And everything in a program must be unambiguous and precise. Programs are large and can be edited many times by many people. For these reasons, I'm inclined to favor programming in plain language with plain ASCII.

See:

Knuth, The Art of Computer Programming Vol 4A, Combinatorial Algorithms

The On-Line Encyclopedia of Integer Sequences, https://oeis.org



My background is also in maths and I am keenly interested in, and frustrated by, notation as it appears in various fields. There's a time and place for fancy, dense notation, and I don't think it's here. Subjectively, I found the use of unusual math Unicode symbols in this repo to be gratuitous.


> Well, if mathematicians use these concise notations, why shouldn't programmers? I believe it is because mathematicians don't want or need to take the time and space needed to spell out these operators, variables, and functions in their papers.

Mathematicians don't need to fight compilers, only other mathematicians. If mathematicians had to convince a compiler of their equations, I'm sure they'd be forced to be fully explicit and less handwavey at the margins.





