Executive Summary
In a 2022 announcement, Senator Mike Crapo (R-ID) described the aim of the Financial Data Transparency Act (FDTA): “Making financial data used by federal regulators more accessible and understandable to the American public is an important step in improving government transparency and accountability.” The FDTA, jointly introduced by Senators Crapo and Warner (D-VA), represents a real opportunity to meet these goals.
Implementing the right data standard, as called for in the FDTA, will enable economies of scale; reduce the cost of reporting, data collection, and analysis; and generate high-quality, actionable data for policy-setters, regulators, and the public, including investors and researchers.
Standards like UPCs, QR codes, and shipping containers take an existing process or task and improve its efficiency and effectiveness. Shipping containers, for example, have a standard, engineered structure and design that optimizes the transport process, enabling automation, economies of scale, faster delivery, and better inventory fidelity (less theft and breakage). UPCs and QR codes track inventory and route users to their destinations without exposing underlying details that might cause confusion.
Data standards have a similar purpose and impact. They take the guesswork out of communicating and transporting information, which improves data reliability. They reduce human involvement in data processing and enable economies of scale through automation. Open data standards leverage a competitive marketplace of software tools, lowering the cost of reporting, data collection, and analysis.
Where we are today
FDTA agencies maintain over 400 data collections from thousands of reporting entities, in formats including PDF, text, HTML, custom XML, and XBRL. The current state of data processing and management among the agencies covered by the FDTA negatively impacts reporting entities, regulators, and other data users. Data users have limited access to machine-readable, interoperable data. Disclosure requirements are often fragmented and ambiguous. Data cannot be easily located, inventoried, or stored. Entity and securities identifiers are not consistently applied, which makes it nearly impossible to effectively evaluate business and investment risk.
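To make the identifier problem concrete, the sketch below (in Python, with invented entity names, identifiers, and figures, not drawn from any actual collection) shows the same issuer reported to two agencies under agency-specific identifiers. Without a consistently applied identifier such as the Legal Entity Identifier (LEI), the records cannot be reliably linked; with one, a cross-agency view becomes straightforward.

```python
# Hypothetical illustration: the same issuer reported to two agencies
# under agency-specific identifiers. All names and values are invented.
agency_a_filings = [
    {"internal_id": "A-10293", "name": "Example Bancorp Inc.", "total_assets": 5_200_000_000},
]
agency_b_filings = [
    {"internal_id": "B-77410", "name": "Example Bancorp, Incorporated", "leverage_ratio": 0.091},
]

# Joining on the entity name fails because each agency records it differently.
matched = [
    (a, b)
    for a in agency_a_filings
    for b in agency_b_filings
    if a["name"] == b["name"]
]
print(len(matched))  # 0 -- the same entity cannot be linked across collections

# With a common entity identifier (e.g., the LEI) applied consistently,
# the join is trivial and unambiguous. The value below is a made-up
# placeholder in LEI format, not a real LEI.
agency_a_filings[0]["lei"] = "529900EXAMPLE0000042"
agency_b_filings[0]["lei"] = "529900EXAMPLE0000042"
matched = [
    (a, b)
    for a in agency_a_filings
    for b in agency_b_filings
    if a["lei"] == b["lei"]
]
print(len(matched))  # 1 -- a holistic view of the entity becomes possible
```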
Reporting entities face significant duplication in reporting and confusion in contending with numerous forms. Both reporting entities and data users must work through lengthy technical documentation on how to report and use data, with no linkage between the reported data and a semantic data model. Today’s approach has evolved over time, with each agency laser-focused on its own reporting needs.
Not surprisingly, this has led to a highly siloed approach to data management, which causes many of the problems outlined above. If regulators truly wish to reduce reporting burden, enable economies of scale, and encourage more timely, transparent reporting, they must coordinate their efforts and work together.
What success looks like
As regulators plan the rollout of the FDTA, it is critical to keep in mind what constitutes success: reliable, unambiguously machine-readable, interoperable data; a reduction in reporting burden and cost across all stakeholders; and adaptability to changes in reporting needs and technology over time.
Hundreds of effective data standards programs have been rolled out by regulators worldwide. U.S.-based programs at the Federal Deposit Insurance Corporation (FDIC) and the Securities and Exchange Commission (SEC) were launched 18 and 15 years ago, respectively, and continue to expand because of their success. The Federal Energy Regulatory Commission (FERC) initiated its first data standards program in 2021 and is working to expand it as well.
How to get there
The roadmap to effective data standards that will meet the letter and the spirit of the FDTA is already tested and proven in hundreds of programs worldwide. Agencies that fall under the FDTA have a clear path to follow:
Step 1: Build taxonomies (digital dictionaries) that unambiguously describe each data collection: the facts reported and the relationships between them (see the sketch following these steps).
Step 2: Review data collections to eliminate duplicates and consolidate reporting needs.
Step 3: Consolidate reporting across all FDTA agencies to (again) eliminate duplicates and reduce reporting burden.
Step 4: Educate agencies, reporting entities, and intermediaries that support a robust, competitive reporting infrastructure.
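As a rough illustration of Step 1, the following Python sketch models a tiny, hypothetical slice of a taxonomy. A production taxonomy for these collections would be expressed in an open standard such as XBRL (XML Schema plus linkbases) rather than Python, but the ingredients are the same: named concepts with data types and period types, human-readable labels, and machine-readable relationships, such as calculation rules, that software can check automatically. The concept names and the validation helper here are illustrative assumptions, not part of any agency’s actual taxonomy.

```python
from dataclasses import dataclass

@dataclass
class Concept:
    """One entry in the digital dictionary: what a reported fact means."""
    name: str         # machine-readable concept name
    label: str        # human-readable label
    data_type: str    # e.g., "monetary", "string", "decimal"
    period_type: str  # "instant" (point in time) or "duration"

# Hypothetical concepts -- not drawn from any actual agency taxonomy.
CONCEPTS = {
    "Assets":      Concept("Assets", "Total assets", "monetary", "instant"),
    "Liabilities": Concept("Liabilities", "Total liabilities", "monetary", "instant"),
    "Equity":      Concept("Equity", "Total equity", "monetary", "instant"),
}

# A machine-readable relationship between facts (a calculation rule):
# Assets = Liabilities + Equity.
CALCULATIONS = {"Assets": [("Liabilities", 1), ("Equity", 1)]}

def validate(report: dict) -> list[str]:
    """Check a filing against the taxonomy: unknown concepts and broken
    calculation relationships are flagged without human review."""
    errors = [f"Unknown concept: {k}" for k in report if k not in CONCEPTS]
    for total, parts in CALCULATIONS.items():
        if total in report and all(p in report for p, _ in parts):
            expected = sum(report[p] * weight for p, weight in parts)
            if report[total] != expected:
                errors.append(f"{total} does not equal the sum of its parts")
    return errors

# A filing that any conforming software tool could validate the same way.
print(validate({"Assets": 100, "Liabilities": 60, "Equity": 30}))
# -> flags the broken Assets = Liabilities + Equity relationship
```

Because the rules live in the taxonomy rather than in prose documentation, every vendor’s software applies them identically, which is what makes the resulting data unambiguous and interoperable.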
The FDTA concretely stipulates data standards that “render data fully searchable and machine-readable” and that “enable high quality data through schemas, with accompanying meta-data documented in machine-readable taxonomy or ontology models.” Nevertheless, alternatives to open data standards, such as spreadsheets, custom XML schemas, and artificial intelligence, are likely to be contemplated.
While these approaches may seem easier to implement, regulators must weigh not only the requirements of the legislation itself but, more importantly, the short- and long-term impact of each option. These alternatives will meet neither the requirements nor the goals of the FDTA. Open data standards will.
This paper explores the current state of data management among agencies and provides a roadmap to meet the achievable goals laid out in the FDTA. The ability to link reporting requirements across agencies through universal data standards will give regulators, for the first time, a holistic view of regulated entities. The FDTA, properly implemented, will allow regulators to see both the forest and the trees.