How It Works

MorningEdge is an independent publishing and research operation. The platform studies publicly available market data and publishes observations for general educational and informational purposes. All trading activity described on this site is conducted in a paper trading (simulated) account. No real capital is at risk. This page describes the research process and the organizational structure behind it.

A Note on Paper Trading

All results presented on this site are generated from paper trading — simulated execution against live market data using a brokerage sandbox environment. No real money is at risk in any position described here.

Paper trading results differ from live trading in important ways. Simulated fills do not experience the same liquidity constraints, market impact, or partial fill conditions that occur with real capital. Slippage models are applied in backtesting, but they are estimates, not measurements from live execution. Past simulated performance is not indicative of future results, whether simulated or live.
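To illustrate the kind of estimate a backtest slippage model makes, a minimal sketch might charge half the quoted spread plus a small linear impact term. The function and every parameter below are invented for illustration; they are not the model used by this site.

```python
def estimated_fill_price(side, quote_price, spread, participation=0.05):
    """Toy backtest fill estimate: half the quoted spread plus a small
    impact term scaled by an assumed participation rate. All values are
    illustrative estimates, not measurements from live execution."""
    half_spread = spread / 2.0
    impact = quote_price * 0.0001 * participation  # toy linear impact term
    adjustment = half_spread + impact
    if side == "buy":
        return quote_price + adjustment  # buys fill above the quote
    return quote_price - adjustment      # sells fill below the quote
```

Even a model like this only approximates the liquidity constraints and partial fills that real capital would face, which is why simulated results are flagged throughout.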

If and when MorningEdge transitions any component of its methodology to live capital, that transition will be explicitly disclosed on this site with a clear effective date. Until such disclosure is made, all references to trading activity, positions, and performance should be understood as paper trading results.

Conflicts of Interest

The following disclosures are voluntarily provided and self-assessed:

  • Paper positions — The operator may hold simulated (paper) positions in securities discussed on this site. Active paper positions are disclosed in the daily trading diary. No real-money positions are held in any securities covered by MorningEdge scan results.
  • No payment for coverage — No company, fund, or individual has paid or provided compensation of any kind for coverage or mention on this site. Scan results are generated algorithmically from publicly available market data.
  • No short positions — The system does not take short positions, simulated or otherwise. All scan results are long-only.
  • AI and data vendors — MorningEdge uses commercial AI services (Anthropic, Google) and market data providers (Alpaca, FirstRate Data) in its research and operations. These are standard vendor relationships. No vendor has editorial influence over published content or scan methodology.
  • No affiliate relationships — Links to brokerages, data providers, or other services on this site are not affiliate links. MorningEdge receives no referral compensation.

The Research Flywheel

Development follows a repeating cycle. Each stage has defined entry criteria, and work does not advance to the next stage without satisfying them.

  1. Observation — A pattern, anomaly, or question is identified from paper trading data, diary entries, or literature review.
  2. Literature Review — The knowledge base (80+ trading and financial ML texts) is searched for prior research. External sources are consulted only after internal sources are exhausted.
  3. Lab Study — A formal experiment is designed with a stated hypothesis, dataset specification, and methodology. The study runs against historical data with documented in-sample/out-of-sample separation.
  4. Gate Review — Results are reviewed against pre-defined acceptance criteria. The Chief Human Agent makes the go/no-go decision. Negative results are documented with the same rigor as positive ones.
  5. Paper Trading Validation — Approved findings are implemented in the paper trading system. Changes are validated in simulated execution against live market data before being incorporated into the published methodology.
  6. Monitoring — Paper trading performance is monitored against backtest expectations. Deviations trigger review. The system includes automatic suspension thresholds for execution quality degradation in the simulated environment.
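The suspension logic in step 6 can be sketched as a rolling comparison of realized versus modeled slippage. The class, threshold, and window values below are hypothetical illustrations, not the system's actual configuration.

```python
from collections import deque

class ExecutionQualityMonitor:
    """Rolling check of realized vs. modeled slippage, in basis points.
    Hypothetical sketch: real thresholds and windows are not published."""

    def __init__(self, assumed_bps, max_excess_bps=10.0, window=3):
        self.assumed_bps = assumed_bps        # slippage the backtest assumed
        self.max_excess_bps = max_excess_bps  # tolerated drift before halt
        self.fills = deque(maxlen=window)     # most recent realized fills

    def record(self, realized_bps):
        self.fills.append(realized_bps)

    def suspended(self):
        if len(self.fills) < self.fills.maxlen:
            return False  # too few fills to judge execution quality
        mean = sum(self.fills) / len(self.fills)
        return mean - self.assumed_bps > self.max_excess_bps
```

A monitor like this triggers review rather than silently degrading: once the rolling mean drifts past the threshold, simulated trading halts pending human inspection.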
The cycle passes through eight gates (G1–G8), each requiring sign-off before work advances:

  1. Vetted Observation (Team 1: Diary) — a pattern noticed in paper trading or diary review
  2. Literature Review (Team 2: Research) — search of the knowledge base of 80+ books and papers
  3. Lab Design (Team 3) — protocol and hypothesis written before execution
  4. Lab Analysis (Team 4) — the study is executed and the results analyzed
  5. Gate Review (CHA Decision) — human approval required before any code change
  6. Test & Verify (Team 5) — code change with unit tests and an audit trail
  7. Validation (Team 6: Paper Trading) — live market data confirms performance
  8. Deploy Approval (CHA Deploy Gate) — no code reaches production without CHA authorization

No shortcuts. No exceptions. Every gate is a checkpoint that must pass before an observation advances, and validation findings can spawn new observations that re-enter the cycle.

This cycle runs continuously. Each iteration produces documentation that feeds the next cycle's observation stage.
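In code form, the gated progression amounts to a sequential checklist in which an observation halts at the first failed gate. The gate names follow the stages above; the predicate mechanism is a hypothetical sketch, not the production tooling.

```python
# Ordered gates an observation must clear; names mirror the cycle above.
GATES = [
    "vetted_observation", "literature_review", "lab_design", "lab_analysis",
    "gate_review", "test_and_verify", "validation", "deploy_approval",
]

def advance(observation, gate_checks):
    """Run an observation through the gates in order.

    gate_checks maps gate name -> predicate(observation) -> bool.
    Returns the name of the first gate that blocks, or None if all pass.
    A missing predicate counts as a failure: work never skips a gate.
    """
    for gate in GATES:
        check = gate_checks.get(gate)
        if check is None or not check(observation):
            return gate  # blocked here; no shortcuts, no exceptions
    return None
```

Modeling the process this way makes the "no shortcuts" rule mechanical: there is no code path that reaches deployment without every earlier predicate returning true.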

Development Tracks

Work is organized into six parallel tracks, each with its own research agenda and gate requirements:

  • Scanner & Data — Pre-market gap identification, data quality, and multi-source intelligence validation
  • Exit & Risk Management — Research into position sizing, exit logic, catastrophic floor design, and regime filtering methodologies
  • ML & Prediction — Research into machine learning methodologies for historical gap outcome classification and position sizing
  • Platform & Automation — Scheduling, execution infrastructure, monitoring, and fail-safes
  • Web & Transparency — Public diary, compliance documentation, results publishing, and content
  • Research & Validation — Lab studies, backtesting methodology, dataset integrity, and ongoing validation

The Team

The system is built and operated by a human-AI research team. The Chief Human Agent retains final authority on all strategy decisions, capital allocation, and deployments. AI systems assist with research, development, and analysis under defined role boundaries. All publishing decisions are made by the human operator. AI systems do not independently determine what content is published.

No AI system has autonomous authority to execute trades, modify production parameters, or publish content without human approval. Role assignments and access controls are documented in the internal compliance manual.

See: The Team for the full team structure.

Dataset Integrity

Two tiers of historical data feed every backtest:

  • Tier 1 — Professionally sourced daily + intraday data covering 15,734 securities (active and delisted) across 26 years, including terminal delisting returns that most commercial datasets omit (Shumway, 1997).
  • Tier 2 — Live market data from brokerage APIs for paper trading validation against real-time conditions.
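A point-in-time universe built from Tier 1 data keeps delisted names through their final trading day, so bankrupt or acquired tickers still contribute their terminal returns. The field names below are hypothetical, chosen only to illustrate the idea.

```python
from datetime import date

def universe_on(securities, as_of):
    """Tickers tradable on `as_of`, including names that were later
    delisted. Excluding them would inflate backtest returns
    (survivorship bias)."""
    return [
        s["ticker"] for s in securities
        if s["listed"] <= as_of
        and (s["delisted"] is None or as_of <= s["delisted"])
    ]

# Illustrative records: one survivor, one name delisted in 2009.
securities = [
    {"ticker": "AAA", "listed": date(2001, 1, 2), "delisted": None},
    {"ticker": "BBB", "listed": date(2001, 1, 2), "delisted": date(2009, 6, 30)},
]
```

Scanning the universe as of 2005 returns both tickers; as of 2010 only the survivor remains, yet the delisted name's full history up to its final day stays available to earlier backtest dates.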

Every backtest enforces six bias controls drawn from the quantitative finance literature:

  1. Survivorship bias — Delisted securities are included with full price history through their final trading day, covering bankruptcies, mergers, and involuntary delistings.
  2. Look-ahead bias — Point-in-time data only. No future information enters any signal calculation. Pre-market data is timestamped and frozen before the opening bell.
  3. Selection bias — The Deflated Sharpe Ratio corrects for the number of configurations tested during strategy development (Bailey & López de Prado, 2014).
  4. Overfitting — Strict in-sample/out-of-sample separation with combinatorial purged cross-validation. No parameter reaches the published methodology without out-of-sample confirmation.
  5. Transaction costs — Broker-specific cost models using published fee schedules. Backtests never assume zero-cost execution.
  6. Storytelling — Hypotheses are pre-registered in lab reports before data is queried. Negative results are documented and published with the same rigor as positive findings (DB, 2014).
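The selection-bias correction in item 3 can be sketched with the published Deflated Sharpe Ratio formula (Bailey & López de Prado, 2014). The standard-library implementation below is an illustration under textbook assumptions, not the production code.

```python
from math import e, sqrt
from statistics import NormalDist

def deflated_sharpe(sr_hat, n_obs, skew, kurt, n_trials, sr_var):
    """Probability that the observed Sharpe ratio sr_hat exceeds the best
    Sharpe expected by chance across n_trials tested configurations.
    sr_var is the variance of Sharpe estimates across those trials."""
    norm = NormalDist()
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    # Expected maximum Sharpe under the null of zero true skill:
    sr0 = sqrt(sr_var) * ((1 - gamma) * norm.inv_cdf(1 - 1 / n_trials)
                          + gamma * norm.inv_cdf(1 - 1 / (n_trials * e)))
    # Deflate the observed Sharpe against that benchmark, adjusting the
    # standard error for non-normal returns (skewness and kurtosis):
    z = ((sr_hat - sr0) * sqrt(n_obs - 1)
         / sqrt(1 - skew * sr_hat + (kurt - 1) / 4 * sr_hat ** 2))
    return norm.cdf(z)
```

The more configurations tried, the higher the bar: the same observed Sharpe that looks significant after two trials can be indistinguishable from luck after a hundred.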

Data sources are professional-quality, split-adjusted, and include pre/post-market hours. The dataset methodology, including specific providers, coverage periods, and known limitations, is documented in internal technical specifications.
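The purged cross-validation mentioned in the overfitting control relies on two ideas: training observations whose label windows overlap the test fold are dropped ("purged"), and a short embargo after the test fold is also excluded. A minimal single-split sketch, with invented horizon and embargo values:

```python
def purged_split(n, test_start, test_end, horizon=5, embargo=3):
    """Index-based train/test split with purging and embargo.

    An observation at index i is labeled over [i, i + horizon); it enters
    training only if that window ends before the test fold starts, or if
    i falls after the embargo that follows the test fold.
    """
    test_idx = list(range(test_start, test_end))
    train_idx = [
        i for i in range(n)
        if i + horizon <= test_start   # label window ends before test fold
        or i >= test_end + embargo     # clear of the post-test embargo
    ]
    return train_idx, test_idx
```

Purging prevents the label of a training observation from leaking information about the test period, which is exactly the kind of subtle look-ahead the bias controls above target.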

Compliance Framework

MorningEdge operates under the publisher's exclusion (Section 202(a)(11)(D) of the Investment Advisers Act of 1940, affirmed in Lowe v. SEC, 472 U.S. 181 (1985)). Under the three-prong test established in Lowe, the publisher's exclusion applies when publications are (1) regular and of general circulation, (2) not tailored to individual subscriber circumstances, and (3) impersonal in nature — that is, the publisher does not provide individualized advice based on a specific person's financial situation. MorningEdge's published scan results and research observations satisfy all three prongs: they are published on a regular schedule to all readers simultaneously, they are not customized to any individual's portfolio or risk tolerance, and no personalized advisory relationship exists with any reader.

MorningEdge adopts compliance practices voluntarily because transparency strengthens research quality — not as a claim of regulatory equivalence with registered investment advisers or broker-dealers. The internal compliance manual documents data classification (four tiers: Public, Internal, Confidential, Restricted), access controls, record retention (10-year minimum for trading records), and AI vendor risk assessments. These practices are self-assessed; they are neither third-party certified nor required by regulation for a publisher. Conflicts of interest are disclosed in the dedicated section above.

See: Compliance Infrastructure for the full compliance infrastructure documentation.