OHwA S02E08
From July 30, 2020 / Interview with Bouwe Ceunen
The full interview with Bouwe will be uploaded on YouTube.
Questions from Slido
Round 219 looks like a big burn for most models. With just one week's difference between rounds, do we have any clue as to why that might be?
Regarding the one week time between rounds, Arbitrage said that there would be similarities week to week. This means that tournament participants are making predictions on potentially the same group of stocks with only a week's space between the data. If the same conditions are present within the data, you'll probably get the same result.
"But what really matters is how [the round] resolves." - Arbitrage
Wigglemuse: I don't know, I've never been invited.
Arbitrage: Consider this your formal invitation, sir, will you join me next week for Office Hours with Arbitrage as my special guest?
Wigglemuse: Yeah, sure.
Author's note: [He did.](link to Episode 9)
When do we get the Corr + MMC leaderboard?
Slyfox: It's going to come closer to when the reputation bonus gets phased out.
Arbitrage: So like six / seven weeks it sounds like. Cool.
Slyfox: There might be a secret combination of keys and clicks that you can do to bring it forward, though.
Is MMC performance indicative of future MMC performance?
"I'm going to go with a partial 'yes' on that." - Arbitrage
If the models are stable, then yes. But if a significant percentage of users are changing their models, then potentially no. Arbitrage said that, given how much effort is required to create ten unique models, the models are generally stable and past performance is going to be fairly indicative of the future.
Arbitrage said yes and no.
No: Leverage isn't burning as much, but it's just straight burning 🔥. The consistent burns started when he changed his model. For now, he's okay with that because the model is still beating Integration Test in the same time frame.
Arbitrage's goal with the Leverage model was strictly to not burn as much as Integration Test, which is working. He's waiting to see how Leverage performs with Corr + MMC during an earn period to gauge whether or not his upside is constrained as well.
There's a global stake limit of 200,000 NMR; if we exceed this, payouts will be reduced. I suppose burns and negative payouts would be reduced, too. Is that right?
Slyfox: What we do every Thursday is look at everyone's stake and add it up. If it's above 200,000, then we pro-rata everyone so that your stake value is a little bit lower than what you actually stake. The amount that we pay or burn is pro rata'd in the same way... so if one day we wake up and there's 400,000 NMR at stake, then everybody's stake value would be half of what they actually put down. The value we use to calculate burns and payouts would be half of what you put down.
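Slyfox's pro-rata rule can be sketched in a few lines. This is an illustrative reconstruction, not Numerai's actual code; the function name and data shapes are hypothetical, and only the 200,000 NMR cap and the proportional scaling come from his description.

```python
def effective_stakes(stakes, cap=200_000):
    """Scale all stake values pro rata when the global total exceeds the cap.

    `stakes` maps model name -> staked NMR. If the total is under the cap,
    stake values equal the amounts actually staked; otherwise every stake
    (and hence every payout or burn computed from it) shrinks by the same
    factor. Function name and structure are illustrative.
    """
    total = sum(stakes.values())
    if total <= cap:
        return dict(stakes)
    factor = cap / total
    return {name: amount * factor for name, amount in stakes.items()}

# Slyfox's example: 400,000 NMR staked against a 200,000 cap means every
# stake value used for payouts and burns is half of what was put down.
stakes = {"model_a": 300_000, "model_b": 100_000}
print(effective_stakes(stakes))  # {'model_a': 150000.0, 'model_b': 50000.0}
```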
Instead of reducing payouts, could an alternative be to define individual stake limits (e.g. 10,000 NMR per user)?
Arbitrage pointed out that this runs the risk of people fraudulently creating new accounts.
Wigglemuse: I know that one! There was a glitch last week where Integration Test didn't submit, and it's late and now it's got a penalty. If you look a week ago, they were the same rank, same score; now they're diverging in the last five days.
Any tips on how to lower auto correlation? What is a good / reasonable value?
"If you try to target all of the metrics, you're going to fail on all of the metrics." - Arbitrage
Arbitrage urged caution, and noted that there has to be some sacrifice somewhere. When Arbitrage was testing his most recent iterations, his process involved scoring the models and when one had a low auto correlation, if all of the other metrics looked good, he kept that model. He didn't try to target auto correlation because he firmly believes if you try to target all the metrics, you'll fail on all the metrics.
Arbitrage's advice is to focus on one metric and run the model multiple times, changing the random seed a few times until you find one that makes everything a little bit better. He added that if your model consistently has high auto correlation, that might be a sign that something is wrong.
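Arbitrage's seed-resampling process can be sketched as a simple loop: score the model once per random seed, select on one primary metric, and merely inspect the others (including auto correlation) rather than optimizing them. The scoring function below is a stand-in for an actual train-and-validate pipeline, and all names and metric ranges are hypothetical.

```python
import random

def score_candidate(seed):
    """Stand-in for training a model with `seed` and scoring it on
    validation data. Replace with your real pipeline; the metric names
    and value ranges here are made up for illustration."""
    rng = random.Random(seed)
    return {
        "corr": rng.uniform(0.01, 0.04),
        "autocorr": rng.uniform(0.0, 0.5),
    }

def pick_seed(seeds, primary="corr", autocorr_warn=0.4):
    """Run the model once per seed and keep the seed that maximizes the
    primary metric, per Arbitrage's advice to target one metric rather
    than all of them. Auto correlation is only checked, not targeted:
    if every candidate exceeds `autocorr_warn`, flag it as a possible
    sign that something is wrong with the model itself."""
    results = {seed: score_candidate(seed) for seed in seeds}
    if all(m["autocorr"] > autocorr_warn for m in results.values()):
        print("warning: high auto correlation across all seeds")
    return max(results, key=lambda s: results[s][primary])

best = pick_seed(range(10))
print(f"selected seed: {best}")
```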
For each model, would having the beta coefficient to the meta-model variance of accurate target predictions be a useful metric to have? Similar to smart beta to the SPY.
What could be a reason for sudden MMC greater than correlation?
This could be caused by your model identifying a signal in a regime that it didn't pick up during previous rounds. Arbitrage suspects this is a likely cause because MMC scores tend to increase gradually for most people (so far), and a sudden change suggests to him that there was a change in the signal your model identified. "Either the market changed or the balance of the models changed," he said, "and you've caught the signal and you're giving a good contribution."
Did Numerai once experience a user replacing a model for something worse while it actually was the best based on test set performance?
Is it prohibited to have two x10 accounts?
Yes, you can only have ten models total in the tournament.