What if your accounting estimates could stress-test themselves?
Join us to explore how multi-agent AI systems powered by Large Language Models could change how accounting estimates are prepared and improve their accuracy. Agents can systematically test modifiers through an iterative process, dynamically simulating "what could go wrong" scenarios. They can probe internal and external variables that could throw an estimate off, then iteratively refine their outputs until the figures hold up under scrutiny.
Attend this webinar to see how a structured, AI-driven approach can systematically surface irregularities that traditional review processes often miss, improving both the accuracy and consistency of your financial reporting.
In this session, you'll learn:
- How multi-agent LLM systems can be designed specifically for financial estimation workflows
- Why iterative, scenario-based testing leads to more defensible accounting estimates
- How to incorporate exogenous and endogenous risk factors into your AI-powered review process
- What this means for the future of audit readiness and financial compliance
Whether you're in finance, accounting, audit, or technology leadership, this webinar offers a practical look at where AI-driven financial intelligence is headed — and how to get ahead of it.
Reserve your spot. Your estimates deserve a smarter review.
Speakers:
- Arion Cheong, Assistant Professor, School of Business, Stevens Institute of Technology
- Campbell Pryde, President and CEO, XBRL US
- Steve Yang, Associate Professor; Director, CRAFT at Stevens Institute of Technology
This event is free to attend, with an option to participate and earn 1.0 CPE in NASBA’s Accounting Field of Study for $49 ($39 XBRL US Members) – look for details in the registration confirmation.
Prerequisites: None
Program Knowledge Level: Basic
Advance Preparation: None
Program Format: Group Internet Based

