Since OP is the author of the linked project: How do the serialized expressions and the server program fit together, and what would this software be used for? Is it still in early development? Also, why not serialize the expressions as simple strings/s-expressions -- what is the upside of using protobuf for this? Doesn't it just add unnecessary complexity and overhead?
The software is part of a smart (electricity) grid control platform. I agree that the GitHub project also contains quite a bit of code that is unrelated to the title of this Show HN post.
The particular thing I wanted to highlight is explained concretely here: https://github.com/niekbouman/commelec-api/blob/master/docs/...
The application of the code is a networked control system where, roughly speaking, many resources send messages to a central controller. In light of this "many-to-one" topology, the advantage of the approach taken in this project (essentially, sending an abstract syntax tree encoded in capnproto) over just sending a string is that no parsing step is needed at the receiver, which saves time. (It is a real-time system.)
The server program (I assume you're referring to daemon.cpp) was added as a bridge between the more powerful capnproto-based message format and a less flexible but easier-to-use (JSON-based) interface for non-experts. However, what happens in this daemon should not be viewed as part of this Show HN topic.
For more details about the project, you could have a look at the following paper (open access) https://infoscience.epfl.ch/record/210555?ln=en
>> the advantage of the approach taken in this project (essentially, sending an
>> abstract syntax tree encoded in capnproto) over just sending a string, is that
>> there is no parsing-step needed at the receiver, which saves time. (It is a
>> real-time system)
Even though Cap'n Proto advertises itself as a "zero copy" serialization scheme that doesn't necessarily need to copy the data when reading it, it still needs to "parse" the incoming message in some sense. And the way your interpreter is implemented, I think you still need to switch()/recurse over everything in the proto tree. So the main difference here would be switch()ing over an enum vs. throwing an extra strncmp/hashmap access in there when loading the expression from the incoming network buffer. I think cutting that one strncmp per symbol could be a performance upside, but I reckon it would be _tiny_ compared to all the networking and evaluation overhead (think low microseconds for a few compares on strings that all fit within the SSO). But it might be an interesting thing to benchmark nonetheless, maybe even against another solution that goes all the way and implements a proper micro VM, so it can execute "bytecode" directly from the incoming network buffer without any intermediate at all.
EDIT / side note: I just had another look and saw that you are actually storing some symbols as strings in the capnproto messages and then doing a lookup in the eval loop, too --- and the symbol lookup is implemented in O(N) by doing a linear scan over the symbol list, comparing the subject with each possible candidate here -- if you care about speed/micro-optimizing, I think you can improve this to O(1) with a hash map (or at least O(log n) with a sorted map) -> https://github.com/niekbouman/commelec-api/blob/master/comme...
Another thing if you want to micro-optimize: you should try to pass things by reference instead of by copy in your eval loop wherever possible -- removing as many malloc()s as possible from the inner interpreter eval loop should be one of the lowest-hanging fruits. What especially stuck out to me was your use of the auto keyword in the eval code -- some thoughts regarding that:
- the "auto" keyword does not automatically take a reference: a plain "auto" may create a copy. Use "auto&" or "const auto&" to take a reference.
- especially in the new range-based for loops: in the places where you are currently writing for (auto item : collection), you are copying every item while iterating -- for (const auto& item : collection) will eliminate those copies.
EDIT2: I skimmed your paper and saw it includes some benchmarks regarding message size. However, you benchmark your Cap'n Proto solution ONLY against MathML, an XML-encoded representation (lol).
So I tried one example from your paper -- am I missing something here?
Expression: "P-10+q^2"
- Your capnproto encoding with packing: 82bytes (value from your paper)
- Your capnproto encoding without packing: 280 bytes (value from your paper, actually larger than XML...)
- mathml (xml) without packing: 196 bytes (value from your paper)
- mathml (xml) with packing: 164 bytes (value from your paper)
- string encoding of the same expression ("P-10+q^2"): 9 bytes
[ Hope I am not coming across as too negative, but since you put it in a paper and published it I think it is fair to do some peer review ]