It's a shame. I had high hopes at the beginning that S4TF - and the investment in Swift from Google - would help Swift break out of the iOS ghetto and cement it as a mainstream language.
Swift's a delightful language to use. It has a lot of the nice things about Rust's type system, but is a heck of a lot easier to use at the expense of a bit of performance. For a lot of use cases, I think this is a great value proposition. ML/data science is a great example where practitioners could benefit from a more grown-up type system than what Python has on offer, but would still be able to keep low-level details at arm's length.
I think Serverless would be another ideal use-case for Swift, where the productivity, clarity and correctness tools it offers would be a huge benefit.
Some very interesting things came out of the S4TF project - like the work on autodiff, and python interop. It's a shame the project never really seemed to get legs, it seems like it just languished for years with no clear direction.
These days I do most of my work in Rust, and I'm happy to do so because the tooling, community, and ecosystem is really amazing. But there are a lot of language concepts and features I miss from swift. I guess it goes to show that the governance and availability of a language have a lot more to do with adoption than the merits of the language itself.
I love all languages in the ML/Haskell tradition, but I think Julia would have been a better fit because it's dynamic yet efficient, and because it has a really decent probability/statistics/ML ecosystem already. Long term, I think it's the best replacement we have for Python in the ML world.
Python has exceptional libraries but, as a language, it's a bit dated on several fronts. This has an impact on library design. In Python, ML libraries are huge monoliths and they depend on a lot of code written in other languages. They are really hard to understand or modify.
In Julia, things are really small and composable. For example, you have a probabilistic programming library like Turing and a differentiable programming one like Flux, and it's trivial to implement some Bayesian neural networks. Same applies to many other things. It's small and beautiful, but it needs more manpower to compete against Python.
> In Julia, things are really small and composable. For example, you have a probabilistic programming library like Turing and a differentiable programming one like Flux, and it's trivial to implement some Bayesian neural networks.
I was taken aback when looking at Turing for Bayesian modelling that the distributions were just the standard distributions found in the Distributions package! In Python, every Bayesian framework has its own implementation of everything from distributions to log probabilities, but it all composes in Julia.
> It's small and beautiful, but it needs more manpower to compete against Python.
Agreed. Docs of major projects are still incomplete or non-existent, there are a lot of projects that have been nigh abandoned (cough naive bayes cough), and the composability of Julia, coupled with the lack of strong leadership, leads to an ecosystem of overlapping functionality. Still, I hope by leveraging PyCall, Julia can overcome some of these issues.
You mean that, in Julia, we the users have to "compose" our own implementations of models (e.g. log probabilities), as opposed to using the already-made ones in Python?
More like... most of the scientific Python stack has settled on a third party library, NumPy, to provide support for numerical computing. That includes providing data types like arrays. Tensorflow doesn't use NumPy's data types, so if you want to use Tensorflow Probability, you need to convert everything you've done in NumPy (or things that are built on top of NumPy, like Pandas) to TensorFlow, and use TensorFlow's functions/methods to manipulate them.
(The same is true if you use something that's built on top of Stan or JAGS or BUGS or something else.)
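To make that concrete, here is a rough sketch of the hop involved (assuming TensorFlow 2.x and TensorFlow Probability are installed; the data values are made up):

    import numpy as np
    import tensorflow as tf
    import tensorflow_probability as tfp

    # Data produced on the NumPy/Pandas side of the fence
    data = np.array([1.2, 0.7, -0.3], dtype=np.float32)

    # Explicit hop into TensorFlow's own tensor type before TFP will use it
    data_tf = tf.convert_to_tensor(data)

    dist = tfp.distributions.Normal(loc=0.0, scale=1.0)
    log_probs = dist.log_prob(data_tf)  # from here on you use TF ops, not NumPy ones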
In Julia's Turing.jl, everything is built around data structures that are first-class parts of Julia, so there's no need to have special Turing.jl versions of, say, probability distributions.
yes, and if you go to that link, you can see it's experimental, for TF 2.4, _and_ the notebook on GitHub was created Aug 3, 2020. So that is relatively new and most likely has a ways to go before further adoption. It's good that the option exists, but it doesn't seem very backwards compatible. That doesn't negate the points made in the parent.
It will mature before the Julia ecosystem matures, so the value proposition of Julia is outweighed by its lack of ecosystem.
Had they transparent interoperability with Python, like Kotlin has with Java, I would consider it.
They could at least rewrite their VM to use the Truffle framework from GraalVM, so that they would become interoperable with Python on Graal, which will mature in a few years.
If they weren't already made in Julia, yes (and assuming a library didn't do the work of composing it and presenting it as a monolith, since pretty much all large frameworks like Flux and DiffEq are also made this way). Although you're maybe trying to imply that everything anyone needs is already made in Python, I occasionally find small mismatches between my use case and what's immediately available, and I have to twist my problem into a stitch of "kinda what I need" pieces (like having to turn my problem into a vectorization problem, a long chain of corrections using pandas to fit an interface, and then some weird logic to transform the table into a CSV in memory to use Postgres' COPY or UPSERT, because pandas to_sql is not flexible enough and iterating and inserting is too slow). Sure, I always achieve the result using already-made stuff, but the glue logic ends up taking more time and is honestly quite hard to maintain for other people (or even myself after a few months); after all, it isn't my algorithm logic but some ad hoc puzzle I just solved.
In Julia, if the DataFrame library is missing something, I can just loop over it like an array and have a method that works just as well as if the library provided it. The CSV, DB and table processing libraries all use the same conventions, so if library "A" solves my issue I'm not forced to use library "A"'s serialization method. Basically, instead of twisting the logic to what I'm given, I just fill in the blanks. Sure, Julia has way more blanks, but Julia is also half the age of some of the Python libraries I use; it's more about the maturity of the community than anything to do with the design choices of the language, and I can only hope it gets better and better with time.
Julia gets compared a lot to Python but truthfully it strikes me as more like a replacement for FORTRAN, MATLAB, and R. The ergonomics of the language seem good but documentation is poor and as a community it's narrowly scoped into the ML/numerical processing world.
If all you do is ML/numerical processing, against data that's already been cleaned up, I bet it's really great tho.
The comparison is natural because Python is a general purpose language which happens to be used for data science, and the same can be said about Julia. The other languages mentioned are not really that great for general purpose programming.
But I agree with you that in the short term it is a replacement for Fortran, Matlab and R. But a new language has to start somewhere. Attacking the niche filled by these languages is natural, as they are old, dated and ripe for disruption. The key is to gain some critical momentum and user base. That will carry the language to other domains.
> Python has exceptional libraries but, as a language, it's a bit dated on several fronts.
I've been using Python for ML for the last 3 years and I've never felt this way. It might be that I'm not all about the hip new languages, but I don't really see the benefit of making Python more ML/Haskell-ish.
The ML use case for Python is roughly as follows: you design, train, and evaluate models. Then, if something is decent enough to use in production, you switch over your application to load and use that instead. I don't really see where Haskell or any language from the ML family can improve in that process.
Sure, the code that you used to implement your model may improve slightly, but I don't see that code improving significantly. The fruit of your labor is usually a protobuf file (encoding the TensorFlow graph) or whatever your framework uses to encode the model you built. The code actually surrounding it is very minimal for most use cases.
> In Julia, things are really small and composable. For example, you have a probabilistic programming library like Turing and a differentiable programming one like Flux, and it's trivial to implement some Bayesian neural networks.
There's nothing stopping you from composing things in Python. But it's simply not a goal of 99% of ML libraries to be composable with other libraries. You're probably never gonna run TensorFlow and PyTorch in the same application (trust me, I've tried, it was a nightmare) and I don't see why you would compose a TensorFlow model with a PyTorch model without incurring tons of overhead in your application around gluing these two things.
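To make the gluing concrete, here's a rough sketch of what crossing that boundary typically looks like (illustrative only, assuming both frameworks are installed):

    import tensorflow as tf
    import torch

    # Run part of the computation in PyTorch...
    torch_out = torch.nn.functional.relu(torch.randn(3, 4))

    # ...then hand it to TensorFlow by round-tripping through NumPy.
    # The .detach() is the tell: gradients cannot flow across this boundary,
    # so the two autodiff systems never actually compose.
    tf_in = tf.convert_to_tensor(torch_out.detach().numpy())
    tf_out = tf.reduce_sum(tf_in)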
> There's nothing stopping you from composing things in Python.
There is, and you pointed it out yourself:
> I don't see why you would compose a TensorFlow model with a PyTorch model without incurring tons of overhead in your application around gluing these two things.
This is where Julia is profoundly different. Reusing stuff from different libraries is trivial compared to Python. You could take an activation function made for one library and use it without any modification or wrapping in another ML library.
This is where Julia will win long term. If you look at the ML libraries in Julia, they are tiny. The Python side of things requires gargantuan effort because things are not composable. The wheel is reinvented over and over again.
>there's nothing stopping you from composing things in Python.
In Julia you can compose a custom distribution with a Bayesian model, with an ODE, with a neural network, with number types that carry units, with custom Julia-written CUDA kernels, and with multithreading.
Edit: components that are not designed specifically to work with each other.
Can python even hope to do a fraction of that, still be fast and differentiate through everything?
I'm fairly certain everything you said is possible except for custom CUDA kernels in pure Python. You'd have to write the kernel in C++ and use it in your TensorFlow/PyTorch code. [0][1]
> It's definitely not possible. These have to be rewritten with the specific autodiff /ML framework in mind.
I don't understand why it's not possible. You're asking if it's possible in the language. I don't see anything stopping you from, as you say, writing your own framework, or simply doing it in TF. Perhaps my ignorance of Julia is showing :)
Admittedly it is a bit hard to see why Julia is so different in this area if you have not spent some time with it. It took me a bit of time as well to see it.
But there really is a profound difference. The thing is that Python is really just a language to rearrange networks of nodes written in C++. A node written for one ML framework will be incompatible with a node written for another framework. If you want them to work together, you need to spend a lot of effort gluing them together, and most likely the performance benefits are lost.
This is very different in Julia because you are using pure Julia code. What are specialized C++ nodes in Python are, for the most part, just plain functions in Julia. There is nothing special about them. That is why they can be used anywhere.
In Python frameworks, each of these C++ nodes needs to contain information about how auto-differentiation is done. That is why they are tailor-made for one specific ML library.
In Julia, the whole automatic differentiation machinery works by performing code transformations on regular Julia code.
Are you familiar with LISP, and the idea of code as data? That is what we are talking about here. A regular syntax graph of regular Julia code gets manipulated to do autodiff. You are not organizing C++ nodes in a network.
> A regular syntax graph of regular Julia code gets manipulated to do autodiff. You are not organizing C++ nodes in a network.
That does sound very different than Python. Admittedly, I haven't used Julia, because I never had to for professional purposes. For personal projects, I use PyTorch, because I find it easy to use, intuitive, and I have a lot of experience with it. Maybe one day I will be forced to use Julia and will appreciate what it brings :)
That was a great explanation that makes me appreciate Julia more, thanks.
I don’t much use Julia, but it is a beautiful language. I started a tiny side project for using Julia for non-numerical things like querying SPARQL endpoints, text processing, etc. Julia could be a universal, use for almost everything language.
>I don't understand why it's not possible. You're asking if it's possible in the language. I don't see anything stopping you from, as you say, writing your own framework, or simply doing it in TF. Perhaps my ignorance of Julia is showing :)
It's not that it's not possible. Yeah, you could always "write your own framework".
It's that it's not readily available by the language, ready-made to compose, without resorting to external libs on C/C++/Fortran, and so on.
So, "not possible" not in the sense that "it's not possible to write a program that solves the halting problem". More like "it's not possible to write a web app in assembly". Which means, yeah, it's possible, technically, but you wouldn't like it, it would be full of holes, a pain to maintain, and not a good time investment.
Let's say you write some code that handles numbers differently, say a library that implements quaternion math for doing graphics manipulations. Then you try to use it with TensorFlow, maybe for some AI-driven optimization. Is it going to work? Probably not out of the box. You will have to do some munging of the quaternion data type to shove them into TensorFlow tensors. Do the same thing in Julia with Julia's Flux, and it probably will work, maybe even on the GPU. That's the difference.
If you're rewriting everything to get the behavior rather than just hooking together preexisting pieces, that's not what's generally meant by composition.
The key concept that allows for this high degree of composability is multiple dispatch; it's like being able to define a trait for any function on any parameter type, or to overwrite any existing function to specialise/generalise on one or more parameter types. I.e. instead of wrapping different libraries to glue them together to work on your data, you update existing functions to accept your data; you blend your thing into existing things instead of wrapping. And once blended, your data works not only with that library but also with libraries that use this library, without extra effort and with very terse final code.
The Julia approach gives quicker development. Some of these ML libraries, such as Flux, are so small that almost anyone can read the code, learn how it works, and make modifications. It is not so easy for just anyone to jump into TensorFlow.
The irony here is that tiny Julia libraries give the same power as much larger Python libraries which require highly specialized and trained developers to evolve and maintain.
With Julia it is much easier to grow the ecosystem because you can make lots of relatively small packages which can then be combined in almost any way.
The challenge for the Julia community today is really to make people new to the environment aware of this.
I have encountered people who thought Flux couldn't do anything because it was so small. One of them was so used to the huge monolithic libraries in Python that it did not occur to him that adding something like a new activation function is literally one line of code. He was used to that requiring adding various C++ nodes, wrapping those up, and God knows what other steps you need when you extend something like TensorFlow.
I don't think most people realize how powerful Julia is. They are not used to being able to combine libraries the way you can in Julia. A big part of the effort for the Julia community will simply be to write more tutorials and introductions which better introduce beginners to these kinds of capabilities. Once you know Julia it is pretty obvious.
But to grab people's attention and make them realize Julia could be a solution to their problem, I think one needs a lot of shallow, quick intros to these kinds of things.
> The challenge for the Julia community today is really to make people new to the environment aware of this.
^This. As we're seeing here in the discussion thread, Python folks don't really realize what they're missing when we're talking about composability of libraries in Julia - it's kind of difficult to explain the impact on productivity without people actually trying it out. I think Julia and Flux are getting to the point where they're quite usable and comparable with PyTorch. Also some of the Neural ODE stuff and differentiable programming in SciML seems just a lot easier to implement than it would be in Python.
From the point of view of a DL researcher, you’re able to easily implement your own custom layers. From the point of view of a framework developer, you get much faster development time. From the point of a more normal user, you hopefully get more features faster because the jobs of the framework developer and DL researcher have been made easier.
> I've been using Python for ML for the last 3 years and I've never felt this way. It might be that I'm not all about the hip new languages, but I don't really see the benefit of making Python more ML/Haskell-ish.
Julia is not ML-ish. In fact, at first blush, Julia reads very similarly to Python. You even get list comprehensions in Julia!
> Sure, the code that you used to implement your model may improve slightly, but I don't see that code improving significantly. The fruit of your labor is usually a protobuf file (encoding the TensorFlow graph) or whatever your framework uses to encode the model you built. The code actually surrounding it is very minimal for most use cases.
It's really about breaking down the compositional boundaries here. Upthread, someone talked about how they could play around with Bayesian nets by simply mixing Flux.jl (the differentiable programming library) with Turing.jl (the Bayesian programming library). Mixing TensorFlow/PyTorch with PyStan, for example, can be a nightmare. That's also why there are so many implementations of probabilistic programming frameworks (Edward, PyMC3, PyStan, Pyro, etc); they all use different underlying libraries. In Julia they just all compose together.
> I don't see why you would compose a TensorFlow model with a PyTorch model without incurring tons of overhead in your application around gluing these two things
I find the change to be illuminating. Before I started spending more time with Julia, I would often do derivations on paper for more experimental work, and then implement it using TF/PyTorch in Python from scratch, reading other code where I could for some help. In Julia I can import a library and I'm ready to go. It feels just like working with math itself.
Julia also lets you compose long trains of operators together. That also helps when I'm doing long sessions in the REPL exploring data. It lets me define a few functions in a file, import the file, and just pipe (Julia has a pipe operator which is just function application) output between functions to plot data or calculate errors.
Moreover Julia is a lot more performant for REPL work than Python. In Python I'll usually work with a subset of a dataset to get an idea for it, then run a file with code to process the entire dataset. In Julia I can often prototype and run the algorithm in the REPL itself.
I also want to stress that Julia is quite a bit faster for REPL development, both in terms of raw speed and in how much of the workflow can stay in the REPL.
I largely agree, Julia is such a cool language and had so much potential. It definitely surprised me when they went with Swift instead, but realizing that Chris Lattner worked at Google at the time explained a lot. Unfortunately, every time I try to get into Julia, it just feels awkward coming from Python and a bit like stepping back in time.
The stupidest, (stupidest in the sense that I really wish they didn't bother me, because they're silly things!), things that bother me are:
1. The 'end' keyword. It's everywhere! Loops, if-else statements, function bodies. I mean I understand why it's helpful for parsing and providing more info to the compiler, but honestly, I would've rather we just stuck with curly braces '{}'! At least it's fewer characters to type and it just feels less crowded on screen while reading. It feels like such a petty complaint, but honestly, it feels like I'm writing Pascal or Matlab all over again. Which leads me to my second point.
2. The default choice of 1 based indexing. I'm not going to go into it, because plenty of people before me have beat this dead horse already[1][2][3], but I can't help but be saddened by this choice and its implications. It's important to acknowledge the fact that Julia started as a competitor to Matlab and Octave, so it makes sense from that perspective. However, it could have been a much more general purpose language, with huge performance benefits over popular interpreted languages like Python, JS, and Ruby. It could have been a unifying force in the scientific computing community, bridging the gap between R and Python users with a greenfield approach that would have been a force to be reckoned with. Instead, rightly or not, it's viewed largely as 'just' a Matlab replacement.
Now, regardless of whether 1 or 0 based indexing is truly 'better' or the 'end' keyword is no big deal, the reality is that there's a huge demographic of potential users that won't buy into Julia, because it doesn't quite feel as ergonomic as Python/JS/Ruby and won't take it seriously as a general purpose language, because it looks/feels like Matlab and they only use 'real' programming languages. Again, I'm not saying this is right, but it is the reality we're faced with and it just feels like a huge missed opportunity and bums me out.
Language aesthetics do matter. Broadly it seems like we've accepted that "c like" syntax - including brace-delimited blocks and zero-based indexing - should be the norm for programming languages. Any language which goes in a different direction should have a very strong reason to do so, because any deviation will be an obstacle for adoption.
I share your frustration with the `end` keyword - it's just needlessly verbose, and for something used so frequently it makes sense to be as terse as possible.
I have some similar quibbles with Rust syntax: I know it's a minor issue, but I'm really disappointed that snake_case was adopted. It's just ergonomically inferior to camelCase in every measure. It's one more character to type for every word, and on top of that, on a US keyboard, the underscore character is a pinky key way far away from the center of the keyboard. Making this one of the most frequently typed characters in the language makes no sense.
snake_case is a lot more readable than CamelCase, which is a huge benefit ergonomically. The keyboard layout is no big deal, one can easily change it or use shortcut-based autocomplete. Rust does use CamelCase for type identifiers, trait identifiers and enum constructors, and the contrast with snake_case also aids readability.
I disagree that it's "a lot" more readable. I have read a lot of camelCase code in my lifetime, and I can count on zero hands the number of times I ever had an issue parsing code due to the use of camelCase.
Keyboard remapping seems like an extreme solution, and I don't want to train my fingers in such a way that when I sit down at a different workstation that doesn't have my .vimrc I can't type rust anymore.
You don't need snake_case for contrast. You can use lowerCamel and UpperCamel to denote the same levels of significance. SCREAMING_SNAKE is fine for constants because they don't get used all that often, but rust maps the hardest-to-type case to the most-frequently-used, which in my opinion is an ergonomic failure.
It really depends on your naming conventions. If you consistently treat acronyms in the same way as regular words (i.e. "XmlReader", "IoStream" etc), then there's no practical difference with "xml_reader" and "io_stream".
Can you give an example of where you find this "terrible?" I have never had an issue with it, and even if you don't like it aesthetically, I think the fact that you don't like looking at certain names (a subjective complaint) is objectively easier to overcome than being required to type one of the least ergonomic characters on the keyboard over and over, which is a matter of physical reality.
So would you seriously suggest that programming languages should be optimized for extremely rare, niche keyboards rather than the standard that ships with virtually every laptop which nearly 100% of coders will be using?
`end` marks off blocks of code much more clearly than curly braces. It also requires fewer keyboard taps: typing {} requires four key presses in total (shift-[ and shift-]), while end is just three key strokes.
But more importantly, it saves curly braces for other uses where they are more needed.
Typing braces is basically a gesture. You hold the shift key with your left pinky, roll your right ring finger and pinky over the open and close braces, then tap back-arrow and return to get inside the block. End is three distinct key presses.
Also braces have a nice symmetry, which is also convenient for parsers and tooling to count opens and closes.
I also think "end" is just more visually noisy. Braces are a symbol, so it's easy to filter them out when you're looking at code, but a trail of "end"s takes up more real-estate than is semantically justified.
It seems odd to say "we just write" for some arbitrary choice with its own tradeoffs; {} mean a set in Python or a literal array initialiser in C# or a scriptblock in PowerShell or a dfn in Dyalog APL or a JSON dictionary, etc. With only a limited set of whatever symbols happened to end up on a US ASCII keyboard 40+ years ago, there is heavy competition for them, it's why J broke the symmetry of pairs and went for two-character symbols, why PowerShell went for comparators like -gt and -lt, why C# has => which is completely different from <=, why << is not a double application of less-than but is bit shifting ...
To act like there's an objectively better use for {} is to miss how many tradeoffs there are, and how many things people want symbols for out of a standard keyboard.
< and > are comparison operators for checking if something is less than or greater than something else.
This is okay in a static language where there are special slots for a type in which only a restricted subset of things can happen: you'd know at the parser level whether < and > refer to the type meaning or the operator meaning.
In Julia, types are values and values can live in types, and arbitrary operations can happen in a type.
E.g. I can write Tuple{1 < 2}, which just becomes the type Tuple{true}, i.e. a Tuple type with parameter true.
Or you could just not use them for type parameters. That's also a solution for what it's worth. I think the curly braces look better as type params anyways imo.
That's not a particularly convincing blog post - for example, you don't need square brackets for indexing; one of Ken Iverson's changes going from APL to J was to see indexing as just another function application and remove square-bracket syntax from it. Grouping arithmetic expressions is worsened by complex precedence rules; APL always evaluating right-to-left and all operators having the same precedence changes how that works as well (even though it does have grouping parens too). Then complaining that "<>" are used in mathematical expressions, but not being annoyed by "main()" in a function declaration or the "?" and ":" used in the complex type expression?
Seems to me that "end" is one of the worst available choices; if we're going for keywords, why not make them "endif", "endfor", "endwhile" (or "wend"), which makes it explicitly clear what each one closes?
Re: 0-indexing vs 1-indexing. If you use 0-indexing, you turn off a lot of non-engineering scientific programmers. My personal experience is that 0-indexing is better for more engineering applications, while 1-indexing is better for math. I'm a weirdo in that I don't seem to mind either one though.
> you turn off a lot of non-engineering scientific programmers.
I think you mean "non-CS engineers". CS is a minuscule branch of engineering. Plenty of chemical, mechanical, civil (and so on) engineers had their whole education doing maths and programming with 1-indexing.
> I'm a weirdo in that I don't seem to mind either one though.
You are not. As an outsider, this is one of the less appealing parts of practical CS: endless bickering about non-substantive issues which most of the time boil down to a matter of personal preference (see also tabs vs spaces, vim vs emacs, react vs vue, golang vs rust and on and on and on...)
I mean, Julia is flexible enough that it's pretty easy to implement arrays with different indexing schemes that have the same performance as built-in arrays: https://github.com/JuliaArrays/OffsetArrays.jl
Zero based indexing came from early requirements to address things in arrays using offsets.
Subsequent languages don't have this issue, and when you think about it, it causes a weird disconnect: 1-based indexing corresponds exactly with how you think about and see elements in a collection, while 0-based indexing, however comfortable one gets with it, requires additional arithmetic.
It’s a relic from the past, and unless you’re doing the very specific thing of indexing via offset instead of position, there’s no reason to hang onto this anachronism.
Most math books are written in a way where 1 is the first element. So if you take math examples and translate to code, it works more naturally.
Also this is how you talk normally. You don't talk about the zeroth-column or zeroth-row in daily speech. You talk about first column and first row.
The only reason 0-based indexing makes sense to me is that I began programming as a teenager and was forced to get accustomed to it. But I remember struggling with it. Yes, when working with memory, pointers, etc. it is more elegant. But if you are not, then I think 1-based indexing looks better.
As a trivial example, if you want to get the sum of the first n numbers, that's `sum(range(n+1))` in Python. Or if you want to get all the prime numbers <= n, find the triangular numbers, etc. In general you end up with a lot of `n+1`s and it's easy to lose track or miss some, and end up with a silent, non-crashing error that produces the wrong result because you're accidentally leaving out the last element of your input.
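A small illustration of the bookkeeping in question (plain Python, nothing framework-specific):

    n = 10

    # Sum of the first n numbers, as above: the "+1" is needed because range()
    # excludes its upper bound; sum(range(n)) would silently drop n itself.
    total = sum(range(n + 1))
    assert total == n * (n + 1) // 2

    # Primes <= n: same pattern, the upper bound needs its own "+1".
    primes = [p for p in range(2, n + 1) if all(p % d for d in range(2, p))]
    assert primes == [2, 3, 5, 7]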
Linear algebra is the most practically consequential one - and suffice it to say, it’s not a coincidence that BLAS, LINPACK, and LAPACK were all written in one-based languages. Matrix indexing is one-based, and beyond the issue of translating notation from equations, IIRC there is even a slight performance difference in some cases.
I believe what it really comes down to is that, just as zero-based really is more natural for offsets (zero offset means the first position, great for pointers), one based really is more natural for counting [and _sets_] (when your index is n that means you have counted n elements up until now).
I think your pain points with Julia are real, but they don’t bother me too much. I mostly use Lisp languages, enjoy but am not so good with Haskell, so I am very flexible in syntax. I shared the dislike for 1 based indexing, but when I recently started evaluating Wolfram/One and Language, my limited experience with Julia and Octave made Wolfram’s use of 1 based indexing OK.
I love experimenting with programming languages but I don’t like recommending languages just as I don’t like recommending movies: I figure that everyone has their own tastes, and that is a good thing.
If it's just the syntax and muscle memory you are quibbling about, you can use python to write code and then transpile it to Julia. Link to github in my other comment in this article.
> I guess it goes to show that the governance and availability of a language have a lot more to do with adoption than the merits of the language itself.
Nearly every Google project eventually gets abandoned because the engineers that made it get promoted and move on to their next promotable projects. The root cause is the ongoing failure of Google's leaders to align employee incentives with the long-term interests of the company or human society.
Even if S4TF had become popular, I expect it still would become neglected and rot like most Google projects.
>I think Serverless would be another ideal use-case for Swift, where the productivity, clarity and correctness tools it offers would be a huge benefit.
Oh yes, I would love to have a Swift framework for Firebase on the server, not only for iOS. It's an atrocity to write the server logic in NodeJS after making the user app in Swift.
Every time I switch from Swift to JS I deeply appreciate the beauty of Swift. In Swift I make far fewer silly mistakes, whereas the NodeJS code feels like it's up and running by some miracle and everything can collapse due to something I did but couldn't catch before something horrible happens.
Could it be that you have more knowledge of Swift and don't care/want to invest in understanding Node.js/JavaScript? I really enjoy writing backend code in TypeScript for Node.js. But I loathe having to write Swift code for iOS. I know a big part is my unwillingness to invest time in Apple's ecosystem and properly learn Swift.
I used to work with web technologies but I got overwhelmed with the complexity of the tooling.
For example, TypeScript is nice but it's not a "real" language in the sense that you can't just write something in it and expect it to work when it's fed directly into the compiler or interpreter.
To use it, you need to set up an environment where all the moving parts work in harmony and the code you write in TypeScript is compiled into the actual code that NodeJS and the browser will understand, which is JS. Debugging unusual bugs and doing anything non-conventional instantly becomes multiple times harder, because you have layers and layers over the actual stuff that is executed, which means you lose the browser's or runtime's debug capabilities, since those would rarely tell you anything useful about the pre-compilation code.
Sometimes you have a library where someone a few years back tried to do something similar to your idea but never had a complete solution and stopped working on it, and when you try to benefit from this work and build on top of it, you find out that you need to modify your working environment to support some special case of legacy code. You do that and it all falls apart; now you must choose to try to fix it or give up and restore your setup. It's just horrible.
The greatest thing about working with Swift on iOS is that starting something new is as easy as creating a new Word document, so you can start working on your idea right away without worrying about the tooling and see where it goes. In the JS world, I used to be exhausted and losing all my motivation by the time I had my environment ready to go. Templates and project starters tend to be no good because they all assume you must be creating a list or a blog app. They all need extra work for a good clean start, and that clean start is often too customised to be used as a general-purpose template.
There are so many solutions for the same problem in the web tech world, each with its own shortcomings, and that's how someone else creates the next framework. On iOS it's easy: the answer is UIKit, and SwiftUI if you feel more adventurous and bleeding edge.
I agree about TypeScript, however this paragraph also describes Swift to a T:
> Sometimes you have a library where someone a few years back tried to do something similar to your idea but never had a complete solution and stopped working on it, and when you try to benefit from this work and build on top of it, you find out that you need to modify your working environment to support some special case of legacy code. You do that and it all falls apart; now you must choose to try to fix it or give up and restore your setup. It's just horrible.
So far Swift code has not aged well, although it's gotten better during the last two years. The perpetual brokenness has moved on to Swift's tooling (binary stability, SwiftPM etc.) which wouldn't affect serverless.
Deno seems to address some of the shortcomings of Node.
I haven't used it on a "real" application yet, but the built in support for Typescript, formatter, linter etc. reduce the complexity of tooling, especially for server code.
Which is the main reason why although I like Anders's work, I keep using plain old JavaScript, hoping that one day browsers at least support WebIDL annotations natively.
They have good defaults, and Xcode is a good IDE with full support for these frameworks. Swift is a good language with sensible conventions, so you don't really need to learn that much.
When you don't know something you can start typing keywords and see what Xcode autocompletes for and read the explanation about that function from the documentation that is provided to you right there. It's not perfect, sometimes there's no documentation but it's not a big deal as you can look into source files or try it out to see how it behaves.
It's not the same, because in JS you usually use a large number of libraries with varying degrees of documentation and different conventions. Also, you lose autocomplete when you expect to receive something as a parameter instead of defining it in that closure.
The basic language API is not that large, the benefit of autocomplete discovery comes when you have autocomplete on custom types of a specific framework accompanied with proper documentation.
I’m sure NestJS, which sparked this thread, has very complete type defs, as does any major framework out there.
You seem intent on dismissing something you obviously haven’t tried in the past couple years; next time I’d suggest a simple “I don’t like it” instead to save us all time.
Sorry you feel that way, it feels like angular for the backend to me which is nice, it follows logical patterns, loading modules allows us to configure micro services from a single mono repo, decorators are nice, encourages use of RxJS which is great and it still allows access to the underlying express app if needs be.
Might not be to everyone’s tastes but it feels like a solid tool to me.
Yep, Vapor looks nice but I want to use Firestore, so I need a reliable integration of Firestore with Vapor and there isn't one. I rely on listening for live data entries and updates on the Firestore, processing those and putting them back.
I'm doing it on NodeJS currently and I hate it. I used to like JS but Swift showed me that there's more, there's beauty in the world.
The only thing I miss on Swift is async/await and that's coming.
Yeah. I think being unable to merge back into the Swift mainline really makes adoption harder. I made some efforts to have PythonKit / swift-jupyter run smoother and it feels really good as a replacement for Python: https://liuliu.me/eyes/data-science-setup-with-swift-in-2021...
Luckily, I don't think Swift on other platforms is dead. SwiftNIO seems to have worked quite well as a project. Bazel support is solid. They still release Linux / Windows Swift toolchains.
Also, shameless plug: I implemented my own deep learning framework in Swift: https://libnnc.org/s4nnc/
I agree and am also disappointed. On the other hand, I really don't miss Swift outside of iOS because of Rust and Kotlin.
I wonder if Kotlin is too far from "arms length" from the low level details in your mind? Because other than that, I actually prefer the language over Swift, generally.
Kotlin is one of the very few languages to have a compiler plugin API, this allows Facebook researchers to make it auto-differentiable on demand
https://news.ycombinator.com/item?id=25128251
While people can argue Swift could be like Golang, it's built with a focus on the iOS ecosystem. You have to manually handle reference cycles. The language is quite complex as well, and the compiler is among the slower ones. Copy-on-write is generally despised these days, even in C++. Just keep in mind that this may leave space for the next expressive general-purpose language that improves on these aspects; we can be optimistic.
> Is the Python type system improving or is it the type annotation system that's improving?
Both. (The type annotation system is deeply tied to the type system, since the latter is what is statically verified, but there are improvements both in what can be checked—the type system—and how that is expressed/annotated.)
It's important to understand that the type systems for Python are not really an inherent feature of Python, but the systems enforced by each of the static checkers (which tend to adopt each other's advancements and drive syntactic support into the core system of annotations, so there's a common baseline advancing with the individual systems). A recent advancement in one of those type systems would be pylance (actually pyright, since that's the typechecker for the pylance language server) support for higher-kinded type variables [0]. Mypy, I believe, has identified this as a thing to support, too.
An example of an improvement in annotations specifically to enable type system improvements that require supporting syntax improvement is PEP 646 support for Variadic Generics. [1] Even though it is in draft, pyright has already landed support for it for the type system improvements motivating it. [2]
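As a toy sketch of what variadic generics let you express (requires Python 3.11+, or typing_extensions on older versions; the names below are made up, not from any real library):

    from typing import Generic, TypeVarTuple, Unpack

    Shape = TypeVarTuple("Shape")

    class Array(Generic[Unpack[Shape]]):
        """A toy array type parameterized over an arbitrary number of dimensions."""

    def add_batch_dim(x: Array[Unpack[Shape]]) -> Array[int, Unpack[Shape]]:
        """A checker like pyright can track the extra leading dimension."""
        ...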
Oh, okay, the problem here is that we have different understanding of the relevant meanings of “type” and therefore “type system”, which is unsurprising because the term has many different uses in programming.
I am using “type” in the standard computer science sense of statically enforced invariant and “type system” as the system of statically enforced invariants, distinct from annotations which are the syntax by which types are expressed.
You seem to be using "types" and "type system" to refer to something else, perhaps some aspect of run-time behavior / "dynamic typing". Whatever it is, there's probably also some improvement in that, too, but given that the original concern of the upthread comment about type system improvements was type safety, which is what the system of statically verified types addresses, it's not really a germane issue.
There's multiple aspects to what a type system can entail.
There's dynamic/static typing, which is what most of the thread is about. Python is dynamically typed, which makes it really easy to pick up and make stuff in, but you get issues like runtime errors/bugs since there's no inherent type checking other than attribute access failure or explicit isinstance calls. To the best of my knowledge, there's no advancement here in Python since it's antithetical to the language design. It is strongly typed however, which is great, but not really part of the discussion since the thread is assessing the virtues of static/dynamic typing and not weak/strong typing.
Type annotation helps partially here since you can statically check the types notated post facto but this is not the type system. The type system in a language is a contract by the compiler/interpreter to enforce type safety. Annotations have no contract for correctness of description or even guarantees of being checked.
This is why I'm trying to draw the distinction between the type system vs the type annotation system. The type system itself hasn't improved much/at all in years by design. The type annotation system has however improved significantly.
> Type annotation helps partially here since you can statically check the types notated post facto but this is not the type system.

Whatever a static type checker enforces is exactly a type system (and the only thing that is), in the standard CS sense.
The fact that the type checker isn't integrated with the compiler/interpreter is a different design choice from other systems, but it doesn't change what it is.
Effectively, mypy (and this was explictly what mypy was originally envisioned as, as is still reflected in its domain name) and other typecheckers for Python create separate statically typed languages with the same relation to Python as TypeScript has to JavaScript, except that source code in those languages is also valid Python source code with identical runtime behavior. That is, each typechecker creates a new language that is a statically typed strict subset of Python, with its own type system (most of which is common across the whole set of languages, but there are slight variations as each are ahead of the common baseline in different ways.)
> The type system in a language is a contract by the compiler/interpreter to enforce type safety
No, it's enforced by the typechecker. Now, in most languages implementations that have a type system, that's part of the compiler, or more rarely interpreter, but there is no inherent reason it needs to be.
> This is why I'm trying to draw the distinction between the type system vs the type annotation system.
Yes, you've invented artificial nonstandard terminology here that is clearly not the terminology being used in the post you argued against, in order to make a really extended argument that vigorously exercises the fallacy of equivocation and badly misses the point of the post it sought to dispute.
To the extent there is a legitimate point here, it boils down to "Strictly speaking, Python remains untyped, and what is advancing are the type systems of a family of statically typed languages built around Python, that are strict subsets of Python, that leverage features built into Python specifically to enable such languages, and which are developed in close collaboration with Python, in some cases sharing core team members."
The lack of algebraic data types is what python gets criticisms about. That situation is improving, but I would have preferred match to be an expression rather than a statement.
I get that you're not particularly excited about gradual typing and the fact that type checkers are optional and live in a separate ecosystem from the interpreter/language runtime.
I actually like the fact that python code without type annotations continues to work as before and allows for easy transcription of ideas into testable code.
If a certain use case requires "forced" type checking, it shouldn't be hard to write a lint rule that disallows code from being committed if there are type checking errors, including missing type annotations.
The issue though is that Python's type annotation is not a substitute for type checking. Annotations are optional and can be inaccurate.
That's fine, this is part of the design of Python. However strict type systems are a benefit of other languages, that may not fit Python. Type annotation shouldn't be considered an alternative to a type system.
> I think Serverless would be another ideal use-case for Swift, where the productivity, clarity and correctness tools it offers would be a huge benefit.
I agree, but sadly none of the big cloud providers has any interest in pushing it - Google's got Go, AWS and Azure seem focused on Typescript.
I'm running Swift server side on Azure. You simply package it in a docker container and run it in an appservice. The only problem really is not having native blobstorage support (like AWS S3) but I wrote an opensource implementation that supports the basics at least. Im not going to plug the library here however coz its far from finished :)
Yes theoretically I think there is nothing stopping you from shipping Swift code on AWS. I am pretty sure you could do it in a lambda and you don't even need a docker container.
If you're talking about a container workflow, I'm pretty sure every cloud provider will support this just fine currently.
It fundamentally can't be improved in a significant enough way and still be Python. The more realistic options are to be okay with Python's type system (I personally am) or to look at different languages if you really want something more robust.
I agree with you, although I must say that as an occasional Python user the way type hints are implemented absolutely baffles me. In particular the fact that they are completely ignored by the standard implementation. They're glorified comments.
Given how opinionated the Python maintainers can be, it baffles me that they accepted to get these optional, noisy, half-baked type hints into the core language.
In my experience, given that they're optional and you'll almost never get 100% of your code and its dependencies with correct and up-to-date signatures, it's just a nuisance. Maybe in the right projects, if all the devs are very thorough with them, it can be helpful, but that's really not my experience. If at least it triggered an assertion at runtime when the type doesn't match it would be massively more useful. And even then, if you're so thorough with your typing, why not just use a proper statically typed language?
I really don't get why it's even there to be honest.
Type hints are not perfect, but IMO after using them for a while, they are significantly better than nothing.
If you do the legwork, you can get mypy[0] type-checking done on your codebase in a way that is similar to how TypeScript does type-checking. There are stub files that are provided by project maintainers that are then used by mypy to infer things like e.g the return type of a function.
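As a rough sketch of that workflow (the error wording is from memory and may differ slightly between mypy versions):

    # example.py -- mypy can flag this ahead of runtime
    def total_price(prices: list[float], tax_rate: float) -> float:
        return sum(prices) * (1 + tax_rate)

    total_price([9.99, 4.50], "0.2")

    # $ mypy example.py
    # example.py:5: error: Argument 2 to "total_price" has incompatible type "str"; expected "float"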
Type hints are also inline in the code, and not technically comments. They can be retrieved from class attributes using facilities from the standard library [1] and can facilitate other tooling that is specific for your project or that are more general.
> And even then, if you're so thorough with your typing, why not just use a proper statically typed language?
That would remove a lot of the benefit of choosing Python. Python is dynamically typed. Type hints make it possible to do type checking statically, but with a lot of extra leg work (as I described above). Making Python itself statically typed is not something that would interest 99% of the Python community IMO.
> the fact that they are completely ignored by the standard implementation. They're glorified comments.
Not quite. You can reflect on and interact with them at run-time, too. This does make it possible to implement run-time checking as a library solution.
I can't speak to why run-time checking wasn't built into the language as a default behavior, and the PEP doesn't explain why (though perhaps the answer is in the mailing list), but one possibility is that it would be wasteful. Most functions that are only prepared to accept certain types already have some form of run-time type check baked in, either explicitly or incidentally, so adding a second check would likely introduce overhead without producing a whole lot more type safety.
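For what it's worth, a minimal sketch of such a library-level runtime check using only standard-library reflection (it handles plain, non-generic annotations only; the decorator here is illustrative, not from any real package):

    import functools
    from typing import get_type_hints

    def check_types(func):
        hints = get_type_hints(func)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Pair positional arguments with their declared parameter names
            for name, value in zip(func.__code__.co_varnames, args):
                expected = hints.get(name)
                if isinstance(expected, type) and not isinstance(value, expected):
                    raise TypeError(f"{name} should be {expected.__name__}")
            return func(*args, **kwargs)
        return wrapper

    @check_types
    def greet(name: str) -> str:
        return f"hello {name}"

    greet(42)  # TypeError at the call site instead of failing somewhere downstream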
> I can't speak to why run-time checking wasn't built into the language as a default behavior
tl;dr: you either get huge overhead or you need to type-annotate everything like it's Java, and it'll still be pretty limited/not super useful
---
there's a bunch of libraries that do this, and they're all fundamentally limited [though i guess some of these may just be failures of imagination on my part]:
- hard to check generics: if it's a builtin, you have to traverse the whole structure and check every element. and for user-defined collections, the programmer has to manually instantiate every generic like:
CoolSequence[int]([1,2,3])
(which sucks) and every method call/property access needs to be checked
- if your function takes a callback, there's no way to typecheck that callback at runtime (without calling it, at which point it's not terribly useful. which extends to all functions i guess)
- not much you can do with iterators/generators. can't check if something really is an `Iterator[int]` without exhausting it, and e.g. type-erroring after processing 20 elements because it yielded not-an-int isn't very useful
so in the end, even if you wrap Every Damn Thing in a type assertion, you still get the errors very late, not much sooner than your usual "AttributeError: object None has no property named 'foobar'". kind of a worst-of-both-worlds situation
I'm pretty sure the level of type annotation you'd get roped into doing would be about on par with what you get in Java 10: function parameter and return types, but not local variable types.
My view is completely opposite to yours. I find type hints extremely useful and every new line of Python code I write uses them. I finally no longer have to do type checking / duck typing in my head but can simply write the types down. Moreover, an IDE like PyCharm will then do the type checking for me, notifying me of potential errors. (Even if I decide not to use Mypy.) Third, I finally only need to look at the signature of standard library functions to see what parameters they accept / what values they return. Previously, I always had to consult the Python documentation.
Bingo. I think a big explanation of the wide variance on opinion about type hints is due to reliance on tooling, or lack thereof.
Writing type-hinted Python in PyCharm or IPython feels like racing in a sports car. Without it, it feels like stop-and-go traffic with the occasional car accident.
I can see how if you are using a WYSIWYG editor, type hints feel like more text for little gain. But even then, it really helps document intent. In fact switching to IDEs is partly what made me go from someone who hated static typing to loving it.
> Writing type-hinted Python in PyCharm or IPython feels like racing in a sports car. Without it, it feels like stop-and-go traffic with the occasional car accident.
This is a beautiful analogy!
As for the second part, though, I would say that even if your IDE does not provide automatic type checking, type hints will still add some benefit because:
- without type hints you have to do all type checking in your head (across all stack frames).
- whereas with type hints you only need to do type checking down to the first level of the stack.
To illustrate what I mean, consider the following code:
    def bar(param1, param2):
        # Some complicated code from which it is not immediately clear what values are
        # allowed for param1 and param2, nor what the return type of the function is.
        ...

    def foo(param):
        return bar(param, param2="some default value")

    foo("something something")
Now, in order to verify that `foo("something something")` is correct, you have to manually look at foo() and then at bar(). Needless to say, this adds a lot of mental overhead to the game. If we added type hints to both foo() and bar(), however, you would only have to check the signature foo() – which, even in IDEs without type checking, usually amounts to hovering over `foo("something something")`.
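For instance, a sketch of the same functions with hints added (the concrete types are made up purely for illustration):

    def bar(param1: str, param2: str = "some default value") -> int:
        ...  # the complicated body stays the same

    def foo(param: str) -> int:
        return bar(param, param2="some default value")

    foo("something something")  # now checkable against foo()'s signature alone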
> Given how opinionated the Python maintainers can be, it baffles me that they accepted to get these optional, noisy, half-baked type hints into the core language
Guido joined the Mypy team while he was Python’s BDFL.
Also, there's nothing half-baked about Python's type hints, Python's type system, or its major type checkers. It's not Haskell, sure, but it's an expressive type system, the typecheckers are reasonably smart, and the annotations are readable and sensible if somewhat verbose; there were some infelicities regarding alternate names for core types in annotations, but that's been improved recently.
> In my experience, given that they're optional and you'll almost never get 100% of your code and its dependencies with correct and up-to-date signatures, it's just a nuisance
In my experience they start to provide value in preventing bugs and easing development because of tooling support way below 100% coverage.
> If at least it triggered an assertion at runtime when the type doesn't match it would be massively more useful.
Python's type annotations are annotations, and are used by some libraries for runtime purposes (validation, serialization/deserialization, etc.) as well as for static checking (pydantic, for example).
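For instance, a minimal pydantic sketch of that runtime use (the model and its fields are made up):

    from pydantic import BaseModel, ValidationError

    class Event(BaseModel):
        name: str
        attendees: int

    Event(name="meetup", attendees="25")       # validated/coerced at runtime

    try:
        Event(name="meetup", attendees="lots")
    except ValidationError as err:
        print(err)                              # the annotation was enforced, not ignored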
> And even then, if you're so thorough with your typing, why not just use a proper statically typed language?
All a "proper statically typed" language is is a language with a static type checker run ahead of time, which Python is if you choose it to be. There's a lot of code in the ecosystem that is more broadly typed than it needs to be, because unannotated code for which checkers can't infer anything better falls back to Any, but that's evolving over time as it becomes more common for popular libraries to be typed, or at least have typings available.
The chorus of people asking for increased type safety had become too loud to ignore. Type hints gave them enough support to cover 95% of their needs, which were/are largely organizational rather than technical: developers must feel sure they are using the right classes in their code, pulled from the right places in their project, and don't have to look up docs at every step (because the IDE will autocomplete stuff). What happens later, in the low-level implementation, doesn't really matter; what matters is that the information about types is surfaced somewhere, so that IDEs and tooling can use it to document projects and help developers.
That’s what type hints do, and yeah, they are basically glorified docs, but integrating docs into syntax is one of Python’s many traditional strengths (see docstrings, doctests, etc). That’s also why they are optional, thank goodness, so people who don’t have big-org / big-project needs can still be productive.
The "glorified docs" framing was always kinda false (type hints can be composed, unlike e.g. a docstring's @type params). But with the introduction of dataclasses, it's empirically false: you can readily hook into the type hints, reflect on them, and use them to marshal data, and with __post_init__ you can add whatever runtime checks you want to enforce invariants.
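A minimal sketch of that (the Point class is made up for illustration):

    from dataclasses import dataclass
    from typing import get_type_hints

    @dataclass
    class Point:
        x: float
        y: float

        def __post_init__(self):
            # Runtime enforcement of an invariant, hooked into construction.
            if self.x < 0 or self.y < 0:
                raise ValueError("coordinates must be non-negative")

    # The hints are plain runtime data, available for reflection/marshalling:
    print(get_type_hints(Point))  # {'x': <class 'float'>, 'y': <class 'float'>}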
Also type hints greatly help transpilation.
You don't need 100% coverage to reap the benefits. In fact, I'll annotate small bits of untyped code, and it makes the code way easier to reason about.
I don't think that's accurate. Python's type system is largely defined by mypy plus the annotations, in addition to runtime Python. In that regard, Python's type system has been evolving really quickly, and it's becoming something quite interesting. I'm a fanboy of static type systems, and over the last bunch of years Python's type system has gone from just bearable to really useful. I'm excited to see where it goes from here.
The dependence on library authors is always a challenge in any language. You might have one author using `[a]` where another uses `PositiveNumeric a, Fin n => NonEmptyList n a` for the same thing. You can always just annotate whatever the library author used (e.g. they return a list of strings, so you use List[str]).
There are some interesting further add-ons that seem very Python, allowing you to go further. For example, with a pandas DataFrame you can just say your type is a DataFrame, which isn't so useful, but it's possible to hack your own types onto it in the vein of https://github.com/CedricFR/dataenforce, or use things like https://smarie.github.io/python-vtypes/ to get smarter typing on things the authors didn't type. I expect that trend will continue.
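As a minimal sketch of that kind of bolt-on typing, not tied to either library (UserFrame and load_users are hypothetical names):

    import pandas as pd
    from typing import NewType

    # A "branded" DataFrame: zero runtime cost, but a checker distinguishes
    # frames that went through validation from arbitrary DataFrames.
    UserFrame = NewType("UserFrame", pd.DataFrame)

    def load_users(path: str) -> UserFrame:
        df = pd.read_csv(path)
        if not {"id", "name"} <= set(df.columns):  # runtime guard
            raise ValueError("missing expected columns")
        return UserFrame(df)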
What fascinates me about python's types is actually the very fact that they are bolted on. You have a language that lets you do crazy things and a type system trying to catch up and make it convenient to verify those crazy things. It's a nice complement to the usual developments of verifying all of the things and slowly extending the set of things you can do.
> You might have one author using `[a]` where another uses `PositiveNumeric a, Fin n => NonEmptyList n a` for the same thing. You can always just annotate whatever the library author used (e.g. they return a list of strings, so you use List[str]).
The difference between these two cases is that a function that returns [a] will always return [a], and if you convert it to `NonEmptyList n a` you will be nudged towards handling the case where it didn't actually return n elements; whereas if you annotate someone else's function as returning List[str], it may well silently not do so some of the time, and you won't get an error until a long way away.
I was unclear. I meant the other way around. In the case that the third party code does in fact always return non-empty lists of positive numbers, the author might just type it as a list. A third party author, even in a language like Haskell, won't necessarily use the bells and whistles of giving very specific types.
And by the List[str] bit, I mean you can annotate it as doing whatever it in fact does. You aren’t protected if you get that wrong, of course, and that’s where things like stubs come in to help.
> In the case that the third party code does in fact always return non-empty lists of positive numbers, the author might just type it as a list.
Yes - but as I said, you can be confident that the list type is accurate, and if you want to convert that list to a specific-length list type then you're nudged towards doing it in a way where you handle the case where the list isn't actually the length you specified.
> And by the List[str] bit, I mean you can annotate it as doing whatever it in fact does. You aren’t protected if you get that wrong, of course
Not "of course"; the whole point of a type system is that it's automatically checked and therefore you can have reasonable confidence in the types you're given.
They don't really though? Stubs are just as unchecked as writing the type signature yourself, so even if they're correct at a given point in time, they'll tend to become incorrect for subsequent releases of the libraries they cover.
For what it’s worth, TypeScript had somewhat of the same problem, and yet these days it’s rare for me to find a well known library that doesn’t include type definitions!
> It fundamentally can't be improved in a significant enough way and still be Python.
Why not?
Static enforcement of invariants doesn't make Python any less Python, so long as it doesn't change the space of what is possible in the language when you aren't asserting conflicting invariants.
GP was referring to type hints and how they don't provide any runtime validation. There are libraries like Pydantic that let you do some, but nothing built into the language runtime IIRC.
First, fix managing Python dependencies and versions; the current system is horrible. Then focus on performance: Swift is much faster than Python. Third, Swift is a much more advanced and intelligent language to program in than Python. When I write Swift code, I feel like a PhD; when I write Python, I'm back in elementary school.
It doesn't matter that "Swift is faster than Python". Your TensorFlow graph is gonna be slow as all hell if it's running on the CPU. The bottleneck here really isn't performance on a CPU. That's a no-brainer for all non-ML applications.
But for applications that require a GPU (i.e., most ML applications), cutting over to Swift from Python will likely win you nothing in performance, and wouldn't be worth it at all.
So one of the big ones is dynamic method replacement. In Swift it's possible to declare a method `dynamic`, which means it will always be dispatched dynamically. It's then possible to load a dylib which "overwrites" the method at runtime, making it possible to implement hot code reloading in an elegant way which is fully type-checked and everything. I am not aware of a way to do this in Rust.
Most of the other things I miss are ergonomic:
- Swift has better inference, so for example in a match statement over an enum, in Rust you always have to type `MyLongEnumName::SomeEnumCase`, whereas in Swift you can just type `.someEnumCase`
- Swift's operators for optionals are much cleaner imo than how it's handled in Rust. For instance, the `?` operator is kind of magical in Rust, because it effectively changes the flow of control of the enclosing function. Optional handling constructs in Swift are always local to the statement, which I find to be easier to reason about, and more composable.
- Rust's module system is needlessly complex and verbose, and adds boilerplate and redundant busy work when you want to refactor code and move things around
- Trailing closure syntax is really nice
- It's sometimes useful to have types as runtime constructs as well as at compile time
- I like Swift's more flexible approach to scopes/namespaces. In Swift, when I declare a struct, I can just declare the methods inside the struct body without needing a separate impl, which reduces boilerplate. Also, if you need a throwaway data type which is only used internally by a couple functions in a Swift type, you can declare it right inside the struct declaration where it's relevant. Rust is more restrictive: I have to declare types either at the module level or inside an execution scope like a function body. So with Swift I just feel like my code is organized in a more semantically coherent way which reads like a book, whereas in Rust it's organized more according to the ceremony required by Rust syntax.
I could go on, but basically I just feel that when I am coding with Swift, the syntax melts away and I am mostly focused on the problem domain. When coding in Rust, the syntax is very present, and is a big part of what I am dealing with.