
Practice Rounds

Practice rounds let you test your model with instant feedback — no staking, no 20-day wait.

How It Works

  1. Download a historical feature dataset (e.g., Q3 2024)
  2. Build your model and generate predictions
  3. Submit — scored instantly against known historical returns
  4. See your IC, rank percentile, and detailed breakdown

Why Practice First?

The live tournament has a 20-day resolution window. If your model is bad, you've staked tokens and waited weeks to find out. Practice rounds let you iterate fast:

  • Test different model architectures
  • Experiment with feature engineering
  • Validate your submission pipeline
  • Build confidence before staking real tokens
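Validating the submission pipeline is mostly about catching malformed predictions before they reach the API. A minimal local sanity check might look like the following — `validate_predictions` is a hypothetical helper, not part of the signalnet client, and it assumes predictions arrive as a flat array with one value per stock:

```python
import numpy as np

def validate_predictions(predictions, n_stocks):
    """Fail fast on shapes and values that would produce a useless score."""
    preds = np.asarray(predictions, dtype=float)
    if preds.shape[0] != n_stocks:
        raise ValueError(f"expected {n_stocks} predictions, got {preds.shape[0]}")
    if np.isnan(preds).any():
        raise ValueError("predictions contain NaN")
    if np.allclose(preds, preds[0]):
        raise ValueError("predictions are constant; rank correlation is undefined")
    return True

# e.g. the 2024-q3 practice dataset covers 503 stocks
validate_predictions(np.random.rand(503), 503)
```

Running this locally before every practice submission means a failed round points at your model, not at a plumbing bug.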

Running a Practice Round

from signalnet import Tournament

t = Tournament(api_key="your_key")

# List available practice datasets
datasets = t.practice.list_datasets()
# [Dataset(id='2024-q3', stocks=503, features=98, period='Jul-Sep 2024'), ...]

# Download practice features
features = t.practice.get_features(dataset_id='2024-q3')

# Build model, generate predictions...
predictions = your_model.predict(features)

# Submit for instant scoring
result = t.practice.submit(
    dataset_id='2024-q3',
    predictions=predictions,
)

print(f"IC: {result.ic:.4f}")
print(f"Percentile: Top {result.percentile}%")
print(f"TC: {result.tc:.4f}, MMC: {result.mmc:.4f}")

Practice vs Live

                   Practice                    Live
Features           Historical (known period)   Current (unknown future)
Scoring            Instant                     20 trading days
Staking            None                        100–10,000 SIGNAL
Payouts            None                        Based on score × stake
Leaderboard        Separate practice board     Main leaderboard
Overfitting risk   High (data is historical)   Low (future is unknown)
Caution

Practice round performance does not guarantee live performance. Historical datasets are susceptible to overfitting. Use practice to validate your pipeline and get directional feedback, not to optimize your model.
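One way to get directional feedback without fitting to a known period: split the practice window in time, iterate only on the early portion, and score the held-out tail exactly once at the end. The array shapes below are illustrative, not the real dataset layout:

```python
import numpy as np

rng = np.random.default_rng(0)
days, n_features = 90, 5                        # e.g. one quarter of daily data
features = rng.normal(size=(days, n_features))  # stand-in for practice features
returns = rng.normal(size=days)                 # stand-in for realized returns

split = int(days * 0.8)                             # first 80% for iteration
X_dev, y_dev = features[:split], returns[:split]    # tune models freely here
X_hold, y_hold = features[split:], returns[split:]  # score once, at the very end

print(X_dev.shape, X_hold.shape)  # (72, 5) (18, 5)
```

Splitting by time rather than at random matters for financial data: a random split leaks same-day information between the development and holdout sets.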