
Scoring v2: Why We Penalize Copycats

5 min read
SignalNet Team
Building the Signal Network

We found a vulnerability in our scoring system. We fixed it before Genesis Round resolves. Here's what happened and why the new system is better for everyone.

The Problem

SignalNet's original scoring formula was simple:

Final Score = 0.70 × IC + 0.20 × TC + 0.10 × MMC
Payout = Score × Stake × 0.25

Each submission was scored independently. Your payout depended on your score and your stake. Nothing wrong with that — unless you're a sybil.
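As a sketch, the v1 formula in code (the metric values here are hypothetical, and the 0.25 multiplier is the payout multiplier from the formula above):

```python
def final_score(ic, tc, mmc, weights=(0.70, 0.20, 0.10)):
    """v1 weighted score: 0.70*IC + 0.20*TC + 0.10*MMC."""
    w_ic, w_tc, w_mmc = weights
    return w_ic * ic + w_tc * tc + w_mmc * mmc

def payout(score, stake, multiplier=0.25):
    """Payout = Score x Stake x 0.25."""
    return score * stake * multiplier

# Hypothetical submission: strong IC, modest TC/MMC, max Genesis stake.
score = final_score(ic=0.08, tc=0.05, mmc=0.03)  # about 0.069
print(payout(score, stake=2_000))                # about 34.5 SIGNAL
```

Because each submission was scored in isolation, nothing in this computation looks at any other submission.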

The Sybil Attack

Imagine Alice builds a great model with IC = 0.08. She stakes 2,000 SIGNAL (the Genesis max) and earns:

Reward = 0.08 × 2,000 × 0.25 = 40 SIGNAL

Now imagine Alice creates 9 more wallets, each submitting the exact same predictions with 2,000 SIGNAL staked. Total earnings:

10 accounts × 40 SIGNAL = 400 SIGNAL

She's effectively staking 20,000 SIGNAL (bypassing the 2,000 max) and capturing 10x more of the reward pool — with zero additional signal quality.

The old scoring system had no mechanism to detect or penalize this.

Why TC and MMC Weren't Enough

In theory, TC (True Contribution) and MMC (Meta-Model Contribution) penalize signals that look like the consensus. A sybil's duplicate signals shift the meta-model toward their predictions, which should reduce their TC and MMC.

In practice, the effect is negligible. With 100 participants, 10 sybil accounts shift the meta-model by ~10%. With the old weights (TC 20%, MMC 10%), 70% of the score came from IC — which has zero sybil penalty. The math didn't punish copycats enough.

The Fix: Two Changes

1. Pairwise Similarity Penalty

After all submissions close, the scoring engine now computes the Spearman rank correlation between every pair of submissions. If your signal is too similar to someone else's, you get penalized.

similarity = |spearman_correlation(signal_A, signal_B)|

If similarity reaches the 85% threshold (configurable per tournament):

diversity_factor = max(0, 1 - max_similarity)
adjusted_score = final_score × diversity_factor

The penalty is graduated ("soft mode"):

Similarity   Diversity Factor   Score Retained
0.80         1.00               100%
0.85         0.15               15%
0.90         0.10               10%
0.95         0.05               5%
1.00         0.00               0%

An exact duplicate receives zero payout (beyond stake return). A 95%-similar signal keeps only 5% of its reward.
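The soft-mode penalty can be sketched in a few lines. This is an illustration of the formulas above, not the production scoring engine; the example signals are made up:

```python
from scipy.stats import spearmanr

def diversity_factor(max_similarity, threshold=0.85):
    """Soft mode: full score below the threshold, graduated penalty at or above it."""
    if max_similarity < threshold:
        return 1.0
    return max(0.0, 1.0 - max_similarity)

# Pairwise similarity is the absolute Spearman rank correlation.
signal_a = [0.9, 0.1, 0.5, 0.3, 0.7]
signal_b = [0.95, 0.15, 0.55, 0.35, 0.75]  # different values, identical ranking
rho, _ = spearmanr(signal_a, signal_b)
print(abs(rho), diversity_factor(abs(rho)))  # similarity ~1.0, factor ~0.0
```

Note that Spearman correlation depends only on rankings, so rescaling or shifting a copied signal does not evade the penalty.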

This makes the sybil attack from our example earn:

Original account: 40 SIGNAL (full reward, diversity_factor = 1.0)
9 duplicates: 40 × 0.0 = 0 SIGNAL each (diversity_factor ≈ 0)
Total: 40 SIGNAL instead of 400 SIGNAL

The sybil invested 10x the capital for 1x the return. The attack is now economically irrational.

2. Rebalanced Scoring Weights

We shifted from IC-heavy to TC+MMC-heavy weights:

Metric   Old Weight   New Weight   Change
IC       70%          40%          -30%
TC       20%          35%          +15%
MMC      10%          25%          +15%

Why this matters:

  • IC measures raw prediction accuracy. Everyone with a decent model achieves similar IC. It doesn't reward uniqueness.
  • TC measures your marginal contribution to the aggregated signal. If you submit what everyone else submits, your TC is zero.
  • MMC measures your performance after removing the meta-model's influence. It rewards orthogonal insights.

With TC + MMC now at 60% of the final score, even without the similarity penalty, a copycat's score is naturally lower. The two mechanisms reinforce each other.
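To see the weight shift's effect in isolation, compare a pure copycat (consensus-level IC, zero marginal contribution) under both weightings. The metric values are illustrative:

```python
OLD_WEIGHTS = {"ic": 0.70, "tc": 0.20, "mmc": 0.10}
NEW_WEIGHTS = {"ic": 0.40, "tc": 0.35, "mmc": 0.25}

def weighted_score(metrics, weights):
    return sum(weights[k] * metrics[k] for k in weights)

# A copycat matches consensus accuracy but adds nothing unique,
# so TC and MMC are ~0 while IC stays at the crowd's level.
copycat = {"ic": 0.08, "tc": 0.0, "mmc": 0.0}
print(weighted_score(copycat, OLD_WEIGHTS))  # ~0.056
print(weighted_score(copycat, NEW_WEIGHTS))  # ~0.032, roughly 43% lower
```

The copycat's score drops by over 40% from the reweighting alone, before any similarity penalty is applied.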

What This Means for Contributors

If you built your own model: Nothing changes for the worse. Your diversity factor is 1.0 (no penalty). The rebalanced weights might actually help you — if your model captures something the crowd doesn't, your TC and MMC will be higher, and those metrics now matter more.

If you were planning to copy someone's signal: Don't. The similarity penalty will wipe out your reward, and the higher TC/MMC weights mean the copied signal scores lower anyway.

If you're running multiple models on the same account: Same-user models are exempt from the similarity penalty. You can still submit 3 different approaches per round without worrying about self-penalization.

Technical Details

The similarity matrix is O(n^2) in the number of submissions, computed after round close and before payout calculation. For 1,000 participants with 489 stocks each, this is ~500,000 pairwise correlations — about 2 seconds on a single thread.

The threshold (85%) and penalty mode (soft vs hard) are configurable per tournament. Soft mode is default — it applies a graduated penalty rather than a binary cutoff. This handles the gray area where two contributors independently arrive at similar (but not identical) signals.
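One way to compute the full O(n^2) matrix in a single vectorized step is to rank each submission's predictions and take Pearson correlations of the ranks (Spearman by definition). This is a sketch that ignores tied predictions, where Spearman would use average ranks; the helper name is ours, not the engine's:

```python
import numpy as np

def similarity_matrix(preds):
    """|Spearman| between every pair of rows.
    preds: (n_submissions, n_stocks) array of predicted values."""
    ranks = preds.argsort(axis=1).argsort(axis=1)  # per-row ranks (assumes no ties)
    return np.abs(np.corrcoef(ranks))              # Pearson on ranks == Spearman

rng = np.random.default_rng(42)
preds = rng.normal(size=(5, 489))
preds = np.vstack([preds, preds[0]])  # sybil: exact duplicate of row 0
sim = similarity_matrix(preds)
print(round(sim[0, 5], 6))            # duplicate flagged at similarity 1.0
```

Independent random signals correlate near zero, so honest contributors sit far below the 0.85 threshold.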

The full scoring pipeline is now:

1. Compute IC, TC, MMC per submission
2. Compute pairwise similarity matrix
3. Apply diversity penalty → adjusted scores
4. Rank by adjusted score
5. Compute raw payouts (adjusted_score × stake × multiplier)
6. Normalize against reward pool (scale factor)
7. Generate merkle tree for on-chain claims
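A condensed sketch of steps 1 through 6 (merkle generation omitted). The array shapes, function name, and normalization detail are our assumptions, not the production code:

```python
import numpy as np

def run_scoring(metrics, sim, stakes, pool,
                weights=(0.40, 0.35, 0.25), multiplier=0.25, threshold=0.85):
    """metrics: (n, 3) IC/TC/MMC per submission; sim: (n, n) |spearman| matrix;
    stakes: (n,) SIGNAL staked; pool: total reward pool to normalize against."""
    scores = metrics @ np.asarray(weights)              # 1. weighted final score
    sim = sim.copy()
    np.fill_diagonal(sim, 0.0)                          # 2. ignore self-similarity
    max_sim = sim.max(axis=1)
    factor = np.where(max_sim >= threshold,
                      np.clip(1.0 - max_sim, 0.0, None), 1.0)
    adjusted = scores * factor                          # 3. diversity penalty
    ranking = np.argsort(-adjusted)                     # 4. rank by adjusted score
    raw = adjusted * stakes * multiplier                # 5. raw payouts
    total = raw.sum()
    scale = min(1.0, pool / total) if total > 0 else 0.0
    return ranking, raw * scale                         # 6. normalize to pool
```

The same-user exemption from the similarity penalty is left out here for brevity; in this sketch every pair of submissions is compared.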

The Broader Principle

Prediction markets and signal aggregation platforms live or die by the quality of their incentive design. A system that rewards capital over insight devolves into a whale game. A system that can be sybil-attacked becomes a race to create wallets.

SignalNet's value proposition is simple: the best signals win. The scoring engine needs to enforce that. With similarity penalties and uniqueness-weighted scoring, we think it does.

Build something original. That's all we ask.


These changes are live for the Genesis Round. See the Payout System docs for the full algorithm.