Let me back up and say that I think your post is great and contains a lot of ideas worth investigating. I also appreciate that you read my reply carefully.
Strict answers to your questions:
- \frac{d(\mathrm{MMC})}{d(\hat{M}\cdot\hat{T})} = -\hat{U}\cdot\hat{M}, so its sign depends on the sign of (\hat{U}\cdot\hat{M}).
- Yes, the neutralized prediction vector is \hat{U} - (\hat{U}\cdot\hat{M})\,\hat{M}, which is not unitized before being dotted into \hat{T} (see the numeric sketch after this list). However, I have asked whether this is mathematically correct and have not gotten a response yet, so there may be some additional scaling applied. The user-defined scaling is certainly missing, but that is applied before adding back to CORR.
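
To make those answers concrete, here is a minimal numeric sketch of that formula. It assumes \hat{U}, \hat{M}, and \hat{T} are already unitized, that the neutralized vector is not re-unitized, and that no additional scaling is applied, so it is an illustration of the math above and not a claim about Numer.ai's actual implementation:

```python
import numpy as np

def unit(v):
    """Scale a vector to unit length."""
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
n = 1000
U = unit(rng.normal(size=n))  # user predictions, hat U
M = unit(rng.normal(size=n))  # Meta Model, hat M
T = unit(rng.normal(size=n))  # target, hat T

# Neutralize U against M: hat U - (hat U . hat M) hat M  (not re-unitized here)
neutralized = U - (U @ M) * M

# MMC under these assumptions
mmc = neutralized @ T

# Expanding: MMC = U.T - (U.M)(M.T), so the sensitivity of MMC to (M.T)
# is -(U.M), and its sign flips with the sign of U.M.
assert np.isclose(mmc, U @ T - (U @ M) * (M @ T))
print("U.M =", U @ M, " MMC =", mmc, " d(MMC)/d(M.T) =", -(U @ M))
```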
People can get worried when folks spend too much time thinking of ways to attack the system. But if you are correct that MMC disincentivizes cooperation, would it not follow as a corollary that there is an incentive to attack the Meta Model?
Less strict answers to your questions:
My main point is, without trying to explore the different avenues of attack hinted at, that when the Meta Model is not performing well, MMC punishes model diversity. Are you aware of this discussion: MMC punishes originality during burns? It seems enough people were concerned about this that, at least for a while, it was something to worry about. Then some smart and generous people, whose names I unfortunately cannot mention, started talking about some really good ways to improve models, and overall we came out of that burn period and have prospered very well since then, hiccups due to Numer.ai’s experiments notwithstanding.
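
One way to picture that concern, reusing the same simplified MMC = U.T - (U.M)(M.T) from the sketch above with made-up numbers (not real round data): during a burn, a model that hugs the Meta Model can have its negative CORR largely cancelled by the neutralization term, while a more original model keeps its full (even if smaller) loss and can still end up with the worse MMC.

```python
# Hypothetical burn round, illustrative numbers only.
burn_mt = -0.05                                     # Meta Model burns: M.T < 0

conform_um, conform_ut = 0.9, -0.045                # hugs the Meta Model
conform_mmc = conform_ut - conform_um * burn_mt     # ~0.0: burn mostly cancelled

original_um, original_ut = 0.0, -0.03               # orthogonal, smaller burn
original_mmc = original_ut - original_um * burn_mt  # -0.03: keeps the full loss

print(conform_mmc, original_mmc)                    # the original model scores worse
```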
Yours and @liz's discussions have really been thought-provoking to me. I am even starting to imagine a DAO of modelers who find a way to cooperate. I think I have some interesting ideas, but I would not be surprised to find that people are already doing something like that, maybe less formally.