The algorithmic trading industry has a transparency problem, and Washington has noticed.
Over 70% of U.S. equity trades are now algorithmically driven, according to FINRA (2025)1. Most of those systems are black boxes — proprietary models that take in data and produce orders with no obligation to explain the logic in between. For years, regulators tolerated this because the markets functioned. Then came the flash crashes, the AI-washing scandals, and a growing realization that "it works" is not the same as "we can explain why."
The rules governing AI in finance are being written right now. Not someday. Right now. And the firms that have already built for transparency will have a structural advantage over those scrambling to retrofit it.
This post is about who's writing those rules, what they're likely to require, and what we've been building at MorningEdge — not because the rules demand it yet, but because we believe auditability is a feature, not a compliance cost.
The Black Box Problem
Here's the core tension: algorithmic trading systems make decisions faster than any human can review, using models that are often opaque even to their creators. When those systems work, nobody asks questions. When they fail, everyone asks the same question: How did this happen, and why couldn't anyone see it?
The SEC has started answering that question with enforcement actions. In March 2024, they brought their first "AI-washing" cases — Delphia Inc. was fined $225,000 and Global Predictions Inc. was fined $175,000 for making false claims about their AI capabilities, as detailed in the SEC (2024)2 enforcement announcement. Delphia told clients it "uses machine learning to analyze the collective data shared" to manage portfolios. When SEC staff asked to see the algorithm, the company admitted it didn't exist. Worse, after telling the SEC it would correct its disclosures, Delphia made additional false claims via email, social media, and press releases, as Rosen-Zvi (2024)3 documented.
Global Predictions marketed itself as the "first regulated AI financial advisor" and promoted "AI-driven forecasts" — neither of which actually existed.
These aren't hypotheticals. They're the cases that are shaping how regulators think about algorithmic transparency. And the SEC appears to be applying enforcement strategies similar to its ESG greenwashing crackdown — examining disclosures across filings, client communications, social media, and websites.
Who's Leading Policy
SEC Chair Paul Atkins gave his first dedicated AI address on March 4, 2026, at the Financial Stability Oversight Council's AI Innovation Series Roundtable; his full remarks are published in Atkins (2026)4. His approach is worth understanding because it's nuanced: pro-innovation, but not anti-oversight.
Atkins favors applying existing disclosure rules to AI rather than creating new AI-specific regulations. He explicitly rejected "prescriptive mandates and disclosure checklists," as Goodwin (2026)5 summarized. His framework is principles-based: if your AI use is material to investors, you need to disclose it. The standard he cited is "whether there is a substantial likelihood that a reasonable shareholder would consider the information important in making an investment decision."
He's also developing what he called an "innovation exemption" — a "cabined, time-limited, transparent, flexible" sandbox where firms can experiment with AI under structured oversight, "focused on investor protection," as reported by FedScoop (2026)6. But he drew a clear line on human judgment: "Algorithmic detection of possible misconduct should not and cannot supplant the considered judgment of our commissioners and staff, nor can it serve as the sole basis of an SEC enforcement action."
Translation: the SEC wants to see what you're building, wants you to be able to explain it, and wants a human accountable for the decisions.
In August 2025, Atkins created an internal AI Task Force to deploy AI across the agency itself — for risk assessments, fraud detection, disclosure review, and market-wide risk evaluation. The SEC is using AI to oversee AI.
The SEC's 2026 examination priorities expanded AI oversight, with examiners now reviewing AI capability claims for accuracy and assessing firms' policies and procedures for AI supervision — particularly in trading and anti-money laundering, per the SEC (2025)7 examination priorities.
The Investor Advisory Committee voted in December 2025 to recommend AI disclosure guidance, as Crowell (2025)8 analyzed. Their findings are striking: only 40% of S&P 500 companies provide any AI-related disclosures. Just 15% disclose board oversight of AI. Yet 60% of S&P 500 companies view AI as a material risk. The committee recommended a three-part framework: define what you mean by "AI," disclose board oversight mechanisms, and report material impacts on both internal operations and consumer-facing products.
FINRA1 published its 2026 Annual Regulatory Oversight Report in December 2025 with a standalone section on generative AI — and, for the first time, a discussion of agentic AI. This is significant for us because MorningEdge runs a team of AI agents with defined roles and governance.
FINRA defines AI agents as "systems or programs that are capable of autonomously performing and completing tasks on behalf of a user," as Debevoise (2025)9 reported. Their concerns are specific: autonomy without human validation, scope creep beyond intended authority, and the fact that "multi-step reasoning or complex chains of agent actions may be difficult to reconstruct, complicating auditability." Their governance expectations include monitoring agent system access, determining where human-in-the-loop oversight is required, tracking agent actions and decisions, and establishing guardrails constraining agent behavior.
They also flagged "Shadow AI" — unapproved AI tools used by employees outside governance frameworks. It's a real problem. The compliance system you don't know about is the one that will fail you.
The CFTC has taken a different approach. A staff advisory in December 2024 reminded registered entities that existing Commodity Exchange Act obligations already apply to AI — no new rules, just "follow the rules you already have," per the CFTC (2024)10 staff advisory. Their Technology Advisory Committee published a report in May 2024 defining responsible AI through five properties: fairness, robustness, transparency, explainability, and privacy, as the CFTC TAC (2024)11 report detailed. The report identified specific risks for financial markets: concentration risk from limited foundation model providers, flash crash potential from erroneous AI output, and the inability to demonstrate fiduciary duties when the model is a black box.
Their long-proposed Regulation AT (Algorithmic Trading) — which would mandate risk controls like maximum order sizes and development/testing/monitoring standards — has never been finalized. But it remains on the table.
What's Taking Shape in Congress
The bipartisan "Unleashing AI Innovation in Financial Services Act" (S.2528 / H.R.4801) would direct the SEC, Fed, CFPB, and other agencies to create in-house AI Innovation Labs, as Hill et al. (2025)12 proposed. The bill has broad support: Chairman French Hill (R-AR), Representative Josh Gottheimer (D-NJ), and Senator Mike Rounds (R-SD) are among its co-sponsors from both parties. The labs would allow AI testing "without unnecessary or unduly burdensome regulation or expectation of enforcement actions," with annual reports to Congress.
The House Financial Services Committee held hearings titled "Unlocking the Next Generation of AI in the U.S. Financial System for Consumers, Businesses, and Competitiveness." The Treasury Department is pushing for "robust but gradual" AI adoption in banking, considering sandboxes and public-private partnerships, as FedScoop (2025)13 reported.
The legislative direction isn't "ban AI in finance." It's "let innovation happen, but build the guardrails." The question for every algorithmic trading platform is: Are your guardrails already in place, or will you be building them under regulatory pressure?
The EU Contrast
For context on where this could go: the EU AI Act classifies algorithmic trading, credit scoring, and robo-advisors as high-risk AI systems, per the European Commission (2024)14 regulatory framework. Full compliance is required by August 2026. Fines run up to 7% of global annual turnover or EUR 35 million.
The U.S. approach is principles-based and sector-specific. The EU's is comprehensive and legislative, as Shearman (2025)15 compared. But both point in the same direction: explainability, auditability, human oversight.
We don't operate in the EU. But we're voluntarily building to that standard, because we believe it's where the U.S. will eventually land — and because it's the right way to build a system that handles money.
The RegTech Opportunity
Here's the part most fintech companies miss: compliance infrastructure isn't a cost center. It's a product category.
The global regtech market reached $18.6 billion in 2025 and is projected to grow to $77 billion by 2034 — a compound annual growth rate of 17.1%, according to IMARC (2025)16. North America holds over 41% of the market. The drivers are straightforward: regulatory complexity is increasing, financial fraud is rising, and manual compliance doesn't scale.
Most regtech companies sell tools to help other firms comply with regulations — transaction monitoring, KYC automation, reporting platforms. But there's a different model: building compliance infrastructure into your own trading system from day one, then offering that infrastructure as a reference architecture or service.
That's what we're doing at MorningEdge.
This is not a product yet — it is a reference architecture, being built and validated in public. If it proves out over 90 days, the infrastructure becomes transferable. Not because we set out to build a regtech company, but because the infrastructure we needed to trade responsibly is regtech.
What We've Built (Before the Rules Required It)
Every pick MorningEdge makes goes through a gated research flywheel — seven validation gates between hypothesis and production. No research finding reaches our trading system without passing through lab design, in-sample testing, out-of-sample validation, code review, and deployment verification. Every gate is logged. This directly addresses FINRA's expectation that firms maintain "prompt and output logs for accountability" and the CFTC's call for transparency in AI deployment.
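To make the idea concrete, here is a minimal sketch of a gated pipeline in which every gate outcome is logged. The class name, gate names, and thresholds are illustrative assumptions for this post, not MorningEdge's actual implementation or its real seven gates.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class GatedPipeline:
    """Ordered validation gates; every gate outcome is logged."""
    gates: List[Tuple[str, Callable[[Dict], bool]]]
    log: List[Dict] = field(default_factory=list)

    def run(self, finding: Dict) -> bool:
        for name, check in self.gates:
            passed = check(finding)
            # Every gate decision is recorded, pass or fail
            self.log.append({"finding": finding["id"], "gate": name, "passed": passed})
            if not passed:
                return False  # a finding stops at the first failed gate
        return True

# Hypothetical two-gate example (real pipelines would chain all seven gates)
pipeline = GatedPipeline(gates=[
    ("in_sample", lambda f: f["is_sharpe"] > 1.0),
    ("out_of_sample", lambda f: f["oos_sharpe"] > 0.5),
])
assert pipeline.run({"id": "exp-42", "is_sharpe": 1.4, "oos_sharpe": 0.7})
assert not pipeline.run({"id": "exp-43", "is_sharpe": 1.4, "oos_sharpe": 0.1})
```

The point of the structure is that the log, not the outcome, is the primary artifact: a regulator can reconstruct why each finding advanced or stopped.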
Our audit ledger is hash-chained. Each entry contains the cryptographic hash of the previous entry, creating a tamper-evident record of every research decision, code change, and corrective action. If someone altered a past record, the chain would break — visibly, immediately, permanently. This is the kind of auditability infrastructure that regulators are signaling they want to see.
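The mechanism is simple enough to sketch. Below is a minimal illustration of a hash-chained ledger, with function and field names chosen for this post rather than taken from our codebase: each entry commits to the hash of the previous one, so editing any past record invalidates everything after it.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def _entry_hash(prev: str, record: dict) -> str:
    # Canonical JSON so the same entry always hashes identically
    payload = json.dumps({"prev": prev, "record": record}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(ledger: list, record: dict) -> None:
    prev = ledger[-1]["hash"] if ledger else GENESIS
    ledger.append({"prev": prev, "record": record,
                   "hash": _entry_hash(prev, record)})

def verify(ledger: list) -> bool:
    prev = GENESIS
    for entry in ledger:
        if entry["prev"] != prev:
            return False  # chain linkage broken
        if entry["hash"] != _entry_hash(entry["prev"], entry["record"]):
            return False  # entry was altered after being written
        prev = entry["hash"]
    return True

ledger = []
append(ledger, {"action": "gate_passed", "gate": "out_of_sample"})
append(ledger, {"action": "code_review", "result": "approved"})
assert verify(ledger)

# Tampering with a past record breaks the chain visibly
ledger[0]["record"]["gate"] = "in_sample"
assert not verify(ledger)
```

A production ledger would add timestamps, signatures, and durable storage, but the tamper-evidence property comes entirely from the chaining shown here.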
We apply Lopez de Prado's17 Deflated Sharpe Ratio to correct for multiple testing bias in backtests. When we run 50 experiments and one shows a promising Sharpe ratio, we adjust for the fact that we ran 49 others. This is the statistical equivalent of showing your work — and it's exactly the kind of methodology the CFTC's responsible AI framework demands under "robustness" and "explainability."
Our Corrective Action Register tracks every identified issue, its root cause, and the resolution. It's modeled on ISO 9001 quality management practices. Every corrective action links back to the audit trail.
We operate under the SEC's Publisher's Exclusion — we publish research and analysis, we don't manage money. Our conflict mitigation is explicit: no front-running, AI governance policies (zero strategy data sent to cloud APIs), and a human-in-the-loop architecture where every production decision flows through our CHA (Chief Human Agent). FINRA's 2026 report specifically recommends "determining where human-in-the-loop oversight is required" for firms using AI agents — we built that from day one.
We run a team of AI agents with defined roles, HR policies, and governance constraints — exactly the kind of framework FINRA is now asking firms to develop. Each agent has explicit scope boundaries, output logging, and human oversight at every decision point. No agent can modify production trading code or strategy parameters without CHA approval.
We don't do any of this because current regulations require it for our use case. We do it because we believe the firms that build for transparency now will be the ones that thrive when transparency becomes mandatory.
What Comes Next
The regulatory landscape for algorithmic trading and AI in finance is genuinely in flux. Chair Atkins withdrew the Biden-era proposed rules on AI conflicts of interest. New principles-based guidance is forming but not finalized. Congress is writing legislation that may or may not pass. FINRA is flagging agentic AI risks for the first time.
What's not in flux is the direction. Every signal from every regulator points the same way: explain what your system does, prove it does what you claim, keep records that can be audited, and make sure a human is accountable.
The black box era isn't ending because regulators decided to kill innovation. It's ending because trust requires transparency, and markets require trust.
We've written more about our compliance infrastructure — the gated flywheel, the hash-chained audit ledger, the corrective action register, and the full governance framework — on our Built to Be Audited page. If this post is about the why, that page is about the how.
References
- FINRA (2025). 2026 Annual Regulatory Oversight Report — Generative Artificial Intelligence. ↩
- SEC (2024). SEC Charges Two Investment Advisers for AI-Washing. ↩
- Rosen-Zvi (2024). Decoding the SEC First AI-Washing Enforcement Actions. ↩
- Atkins (2026). Remarks at the FSOC AI Innovation Series Roundtable. ↩
- Goodwin (2026). SEC Chairman Atkins on AI: Strategy, Governance, and Principles. ↩
- FedScoop (2026). SEC Atkins Floats AI Sandboxes for Financial Firms. ↩
- SEC (2025). 2026 Examination Priorities. ↩
- Crowell (2025). Investor Advisory Committee Recommends SEC AI Disclosure Guidelines. ↩
- Debevoise (2025). FINRA 2026 Regulatory Oversight Report: Focus on Generative AI and Agent-Based Risks. ↩
- CFTC (2024). CFTC Staff Advisory on the Use of Artificial Intelligence. ↩
- CFTC TAC (2024). CFTC Technology Advisory Committee Report on Responsible AI. ↩
- Hill, Gottheimer & Rounds (2025). Unleashing AI Innovation in Financial Services Act. ↩
- FedScoop (2025). Treasury Department Weighs AI Adoption in Banking. ↩
- European Commission (2024). Regulatory Framework on AI — Shaping Europe Digital Future. ↩
- Shearman (2025). AI Under Financial Regulations in the US, EU and UK. ↩
- IMARC (2025). RegTech Market Report 2025-2034. ↩
- Lopez de Prado (2018). Advances in Financial Machine Learning. ↩
Things to Explore
Interested in our research, validation methodology, and trading system?
Get in Touch

Work With Diana
Need a context architect to scaffold your AI agents and facilitate structured learning?
Visit goddev.ai

This post is part of a series documenting MorningEdge's development in real time. The trading system described is paper trading only — no real capital is at risk.