
> Mnesia isn't SQL-compatible,

I think this is accurate. There are references to a master's project that added SQL support to Mnesia, but afaik it's not supported or used by anyone.

> can't scale horizontally (each node has a full copy of the DB + writes hit all nodes),

This isn't accurate: each table (or each fragment, if you use mnesia_frag) can have its own node list. The schema table does need to be on all of the nodes, but it hopefully doesn't see many updates (if it does need a lot of updates, probably figure something else out).
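As a rough sketch of what that looks like (node and table names here are made up for illustration), the replica list is given per table at creation time, and mnesia_frag spreads fragments over a node pool instead:

```erlang
%% Each table names its own replica nodes; only the `schema` table
%% has to span the whole cluster.
mnesia:create_table(user,
    [{attributes, [id, name]},
     {disc_copies, ['db1@host1', 'db2@host2']}]).

%% With mnesia_frag, fragments are distributed over a node pool,
%% with a configurable number of copies per fragment.
mnesia:create_table(event,
    [{attributes, [id, payload]},
     {frag_properties, [{node_pool, ['db1@host1', 'db2@host2', 'db3@host3']},
                        {n_fragments, 8},
                        {n_disc_copies, 2}]}]).
```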

> needs external intervention to recover from netsplits,

This is true, but splits and joins are hookable, so you can do whatever you think is right. I think the OTP/Ericsson team uses Mnesia in a pretty limited setting of redundant nodes in the same chassis, where a netsplit is pretty catastrophic, so they don't handle it. Different teams and tables have different needs, and it would be nice to have good recovery options to choose from. There's also no built-in concept of logging changes for a split node to replay later, which makes it harder to do the right thing, even if you only did quorum writes.
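The hook is Mnesia's system event stream: a minimal sketch (untested, and `pick_winner/1` is a hypothetical policy function you'd supply) subscribes to system events and, when a healed partition is reported, declares one side authoritative so the other side reloads its tables from it:

```erlang
%% Sketch: react to a detected partition by naming a master node.
%% The losing side's tables are reloaded from the master on restart.
handle_netsplits() ->
    {ok, _} = mnesia:subscribe(system),
    loop().

loop() ->
    receive
        {mnesia_system_event, {inconsistent_database, _Context, Node}} ->
            %% pick_winner/1 is a placeholder for whatever policy is
            %% right for your tables (quorum, newest data, fixed node...).
            Winner = pick_winner(Node),
            ok = mnesia:set_master_nodes([Winner]),
            loop();
        _Other ->
            loop()
    end.
```

Since there's no change log to replay, "reload from the winner" is about the best a generic hook can do; anything finer-grained needs application-level reconciliation.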

> and expects a static cluster.

It's certainly easier to use with a static cluster, especially since schema changes can be slow[1], but you can automate schema changes to match your cluster changes. Depending on how dynamic your cluster is, it might be appropriate to have a static config defined in terms of 'virtual' node names: TableX lives on db101, where db101 is a name that can be applied to different physical nodes as needed (but hopefully only one node at a time, or you'll have a lot of confusion).
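Automating such a change can be sketched like this (untested; node names, the table, and the function are assumptions for illustration), joining the replacement node and moving the table copy over:

```erlang
%% Sketch: a replacement node takes over a role from a departing node.
replace_role_node(OldNode, NewNode, Tab) ->
    %% Introduce the new node to the running cluster.
    {ok, _} = mnesia:change_config(extra_db_nodes, [NewNode]),
    %% Give it a disc copy of the schema so it can hold disc tables.
    {atomic, ok} = mnesia:change_table_copy_type(schema, NewNode, disc_copies),
    %% Copy the table there, then drop the old replica.
    {atomic, ok} = mnesia:add_table_copy(Tab, NewNode, disc_copies),
    {atomic, ok} = mnesia:del_table_copy(Tab, OldNode),
    ok.
```

The add_table_copy step is where a large table or slow network makes this take a while, as the footnote below notes.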

[1] The schema changes themselves are generally not slow, especially if it's just recording which node gets a copy (ignoring the part where the node actually receives the copy, which can take a while for large enough tables or slow enough networks). The issue is that Mnesia won't change the schema while any node is in the middle of a 'table dump', where up-to-date tables are written to disk so the on-disk table change logs can be cleared. Those dumps can take a while to complete, and any pending schema operations have to wait.


