Research Principles

Nine principles govern every decision in this research and commentary system. Three are organizational commitments that govern how the system is built, operated, and communicated. Six were discovered through empirical testing — developed using datasets spanning 26 years of market history across 15,734 securities, including 7,000+ delisted tickers, as part of an ongoing research program. Neither category constitutes investment advice; both are published for educational and research purposes.

Organizational principles come first — they define who decides, how decisions are recorded, and how the research process works. A data-proven principle earns its place only when it meets two criteria: it was tested against the dataset under the system's constraints, and the finding changed the current published methodology.

The first three principles are organizational commitments. Each is verifiable by examining the system's process documentation.

1. The Human Decides

Multiple AI systems participate in research, development, and analysis. One human — the Chief Human Agent — makes every capital allocation decision in the paper trading program. The go/no-go call on every scan result publication, every deployment, and every risk parameter is made by a person who bears the consequences.

This is consistent with the publisher's exclusion framework under Lowe v. SEC, which establishes a three-prong test: publications must be (1) impersonal — not tailored to individual circumstances, (2) bona fide — offering genuine analysis rather than a vehicle for trading, and (3) of general and regular circulation — available to the general public on a consistent schedule. MorningEdge's scan results are published simultaneously to all readers via Telegram and the website, are not personalized, and follow a documented daily schedule.

AI systems assist with research and analysis. They do not generate scan results. A human reviews every output before publication.

See: The Team

2. Every Decision Has a Paper Trail

Every change to the research methodology passes through a gated research process before being published. Each gate requires documented evidence: a hypothesis, a dataset, a methodology, a result, and a go/no-go decision. Changes that cannot trace their lineage through these gates are not incorporated into the published methodology. These gates are voluntarily adopted internal standards.

  • Daily scan results are published via Telegram as timestamped validation records before the market opens — there is no advance trading window
  • Every day's results — simulated wins, simulated losses, and paper account balance — are published on the results page the same day; these reflect a paper trading simulation, not a live brokerage account
  • Research methodology is described publicly; specific thresholds and detailed lab results are maintained as internal documentation per the data classification policy

3. Process Informs Product

The system is not designed top-down. Research findings drive development. The exit system was not architected — it was discovered through four iterations of testing. The regime filter was not hypothesized — it emerged from analyzing which market conditions allow gap stocks to hold their gains.

The research lab produces findings. Findings that pass gate review become principles. Principles inform the research system. The research system generates results. Results create auditable content. This cycle is continuous and documented.

See: How It Works

4. The Regime Filter Is the Real Risk Management

The system's primary risk control is not a stop-loss — it is the decision not to trade. A six-signal market regime filter evaluates conditions each morning and classifies the day as GREEN (trade), YELLOW (elevated caution), or RED (sit out). On the majority of trading days, the research system does not generate scan results at all.

Backtesting shows this is the single largest contributor to risk-adjusted performance in simulation. The same strategy, run on all days instead of only GREEN days, loses money in backtesting. The regime filter is the edge — not the entry signal, not the exit logic.
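The gating logic can be sketched as a simple signal count. Everything below, the signal names and the pass thresholds, is a hypothetical illustration; the actual six signals and their cut-offs are internal:

```python
# Hypothetical sketch of a six-signal regime filter. The signal names and
# thresholds here are illustrative assumptions, not the published methodology.

def classify_regime(signals: dict[str, bool]) -> str:
    """Map go/no-go signals to a GREEN, YELLOW, or RED day."""
    passing = sum(signals.values())
    if passing == len(signals):
        return "GREEN"    # all signals favorable: trade
    if passing >= len(signals) - 1:
        return "YELLOW"   # exactly one signal failing: elevated caution
    return "RED"          # two or more failing: sit out

day = {
    "futures_trend": True, "vix_below_ceiling": True, "breadth_positive": True,
    "no_macro_event": True, "sector_alignment": False, "liquidity_ok": True,
}
print(classify_regime(day))  # YELLOW: one of six signals failing
```

A count-based gate like this is deliberately conservative: any two failing signals force a sit-out regardless of which two they are.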

5. Slippage Is the Silent Killer

Execution cost is the gap between a backtest and a live trading account. Small differences in fill quality — measured in basis points — compound into material differences in annual performance. Above a quantified threshold, the strategy's edge disappears entirely in simulation.
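Rough arithmetic shows why basis points matter. The figures below (20 bps per side, 120 round trips a year, a 15% gross edge) are illustrative assumptions, not the system's measured values or its quantified threshold:

```python
# Back-of-the-envelope: how per-trade slippage compounds over a year.
# All numbers are illustrative assumptions, not the system's parameters.

slippage_bps_per_side = 20                            # 0.20% lost per fill
round_trips_per_year = 120                            # entry + exit each trip
cost_per_trip = 2 * slippage_bps_per_side / 10_000    # 0.004 per round trip

# Multiplicative drag across all round trips:
annual_drag = 1 - (1 - cost_per_trip) ** round_trips_per_year
print(f"annual execution drag: {annual_drag:.1%}")    # 38.2% of equity

gross_return = 0.15
net_return = (1 + gross_return) * (1 - cost_per_trip) ** round_trips_per_year - 1
print(f"gross {gross_return:.0%} becomes net {net_return:.1%}")  # net -28.9%
```

Under these assumed numbers a healthy-looking gross edge turns negative, which is the sense in which the edge "disappears entirely" above a cost threshold.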

Slippage assumptions in this system are sourced from published broker execution data and peer-reviewed market-impact models (Kissell, Optimal Trading Strategies), not internal estimates. The system includes a voluntarily adopted suspension threshold: if measured slippage in paper trading consistently exceeds the modeled ceiling, scan result generation pauses pending review. This threshold is a design commitment, not a regulatory requirement.

6. Measure Everything, Then Simplify

No component is removed by intuition. Every configuration — exit timing, position sizing, slippage level, filter threshold — is tested individually against the research dataset before a decision is made. The sell-timing analysis evaluated 9 intraday checkpoints. The slippage sweep tested 11 execution-cost levels. The exit optimization evaluated 4 complete architectures.

This methodology is informed by the backtesting literature on false discovery and overfitting. López de Prado (Advances in Financial Machine Learning, Ch. 11–12) establishes that most published backtests are false discoveries from selection bias. Chan (Quantitative Trading, Ch. 3) requires survivorship-bias-free data and in-sample/out-of-sample separation as prerequisites. Bailey et al. formalize the Probability of Backtest Overfitting metric. Harvey, Liu & Zhu (2016) demonstrate that a discovered factor needs a t-ratio above 3.0 to be considered significant, given the scale of data mining in financial research.
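The t-ratio hurdle translates into a minimum track-record length. A minimal sketch, using the standard approximation that the t-statistic of a strategy's mean return equals its annualized Sharpe ratio times the square root of the sample length in years:

```python
import math

# Sketch of the Harvey-Liu-Zhu hurdle: a discovered strategy should clear
# a t-ratio of 3.0, not the classical 2.0, to offset data mining.
# The Sharpe ratios and sample lengths below are illustrative.

def t_ratio(annual_sharpe: float, years: float) -> float:
    """t-stat of the mean return scales with sqrt(time) at a fixed Sharpe."""
    return annual_sharpe * math.sqrt(years)

# Under this approximation, a Sharpe of 1.0 needs ~9 years of data to
# clear t = 3.0, while a Sharpe of 1.5 clears it in 4 years.
print(t_ratio(1.0, 9))   # 3.0
print(t_ratio(1.5, 4))   # 3.0
```

The practical consequence is that short, impressive-looking simulated track records cannot clear the hurdle on their own, which is why multi-decade datasets matter.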

The extended dataset includes delisted securities that most commercial databases exclude. Independent lab studies measured the exact survivorship bias against the research dataset — not estimated, measured — and found it inverted: including delisted tickers improved results in simulation, contradicting the common assumption that survivorship bias inflates performance.

7. Quality Over Quantity

The scanner selects up to five positions per day, but it will select fewer if fewer meet the quality threshold. No position is added to fill a quota. Empirical analysis shows the lowest-quality selection in any set is disproportionately likely to be the day's worst performer in backtesting.

Correlated selections are deduplicated — no more than one position per sector theme — and the theme leader is determined by dollar volume, not share volume.
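The deduplication rule can be sketched as follows, assuming hypothetical field names (theme, price, share_volume) rather than the scanner's actual schema:

```python
# Illustrative deduplication: keep at most one candidate per sector theme,
# choosing the dollar-volume leader. Field names are assumptions.

def dedupe_by_theme(candidates: list[dict]) -> list[dict]:
    leaders: dict[str, dict] = {}
    for c in candidates:
        # Dollar volume (price * shares), not raw share volume, picks the leader.
        dollar_vol = c["price"] * c["share_volume"]
        best = leaders.get(c["theme"])
        if best is None or dollar_vol > best["price"] * best["share_volume"]:
            leaders[c["theme"]] = c
    return list(leaders.values())

scan = [
    {"ticker": "AAA", "theme": "biotech", "price": 4.0,  "share_volume": 2_000_000},
    {"ticker": "BBB", "theme": "biotech", "price": 12.0, "share_volume": 900_000},
    {"ticker": "CCC", "theme": "ai",      "price": 7.0,  "share_volume": 500_000},
]
print([c["ticker"] for c in dedupe_by_theme(scan)])  # ['BBB', 'CCC']
```

Note that AAA trades more shares than BBB but less dollar value ($8.0M vs $10.8M), so BBB leads the theme under this rule.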

8. Complexity Destroys Alpha

Four complete exit frameworks were built and tested in simulation. The framework with the most rules performed worst in backtesting. The framework with the fewest rules performed best in backtesting. Each additional rule created an additional opportunity to exit a winning position prematurely.

The current published methodology uses three exit rules. This is not a design choice made in advance — it was discovered through iterative testing in simulation. The gap-and-go thesis is a directional bet on price discovery. Every exit rule that fires before the close is a bet against that thesis.

9. A Gap Without a Reason Is a Trap

A pre-market price gap accompanied by a verifiable catalyst (earnings, M&A, regulatory action) behaves differently from a gap with no identifiable cause. Catalyst-backed gaps tend to hold or extend through the trading day in backtesting. Gaps without catalysts tend to revert to the prior close.

The scanner classifies catalysts by source and type, using multiple independent data sources for cross-validation. Single-source intelligence has documented blind spots — multi-source verification reduces false positives.

The six principles above reflect findings from the ongoing research program. They describe what the historical data showed under documented conditions. They do not describe what will happen in future market conditions, which are unknown.

Important Notice

MorningEdge publishes general financial research and commentary. Nothing on this site constitutes personalized investment advice, a recommendation to buy or sell any security, or an offer to manage money. All scan results are published simultaneously to all readers; no individual receives results before or after any other.

All performance figures presented on this site, including daily results, cumulative returns, and account balances, reflect a paper trading simulation using virtual capital. No real money is at risk. Past simulated performance — whether from backtesting or paper trading — does not predict future results under live market conditions.

The research principles, quality gates, and operational standards described on this page are voluntarily adopted internal practices. They are not certifications, regulatory designations, or guarantees of any outcome. They describe how the research is conducted, not what it will produce.