Thank you for your reply; you raise many good points.
Running those on CG in the runtime allocated would be difficult (loading Theano + Keras + serialized model and weights would take much more than that)
If we are to talk purely about the technical aspect, I think that is very manageable in 1 second.
I’m not necessarily talking about deep learning, small to medium neural networks and things like decision forests would be fast to load.
Still, I’m not saying it would be an easy task: maybe you’d have to zip and base64-encode your models to fit within the 100k-character limit, but that’s where the fun begins.
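To illustrate what I mean, here is a minimal sketch of that trick, assuming the model weights can be pickled (the `weights` dict is just a hypothetical stand-in for whatever your framework's save function produces): compress, base64-encode, paste the resulting string into your source, and decode it at runtime.

```python
import base64
import gzip
import pickle

# Hypothetical small model: a dict of weight lists standing in for
# serialized network weights.
weights = {"dense_1": [0.12, -0.5, 0.33], "dense_2": [1.0, 0.25]}

# Serialize, compress, then base64-encode so the payload is plain text
# that can be embedded in a source file under the 100k-character limit.
blob = base64.b64encode(gzip.compress(pickle.dumps(weights))).decode("ascii")
print(len(blob))  # length of the embeddable string

# At runtime, reverse the process to recover the weights.
restored = pickle.loads(gzip.decompress(base64.b64decode(blob)))
assert restored == weights
```

Both `gzip` and `base64` are in the standard library, so decoding at runtime costs no extra dependencies, just a few milliseconds.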
Other languages would require the same treatment: Breeze for Scala, DL4J for Java, … I think I’ve even heard that C/C++ doesn’t have all the possible compiler optimizations enabled. And that would be a lot of work to maintain.
I’d just like to point out that if one of the goals of CG is to help people learn new languages, one could argue that modules are at the center of Python’s philosophy. So it would make sense to have the most mainstream modules available. (I mean, you have to know that one: https://xkcd.com/353/)
I’ve been on CG for only a few months and I’ve reimplemented more algorithms from scratch than I ever had before. It felt great and I learned a lot. I believe that is a core aspect of CG, unlike Kaggle, for example, which is much more focused on the competition itself.
I partially agree with that.
As much as this is true for most algorithms (forests are especially fun to implement), is it reasonable to ask people to rewrite their own version of Keras? (I mean, I will if I have to, but I’m not looking forward to it.)
Anyway, if this is CG’s philosophy, I’m ok with it.
Thank you for the Q-learning links; it looks like a very interesting approach.