
Dale Rutherford

Architect of Ethical AI Integration Frameworks | PhD Researcher in AI Governance, Risk & Information Quality
Designing lifecycle strategies that mitigate bias, misinformation, and error (BME), preserve information quality, and ensure AI system integrity.
Creator of the BME Metric Suite and SymPrompt Framework, aligning AI deployments with global standards and ethical governance.

About Me

I am a PhD Researcher in Information Science and Founder of The Center for Ethical AI, where I develop frameworks and metrics to guide the ethical integration of artificial intelligence across academic, enterprise, and government systems.

 

My doctoral dissertation, “Governing Echo Chamber Dynamics in LLMs,” investigates how large language models amplify bias, misinformation, and error (BME) through iterative feedback loops. I’ve created the BME Metric Suite and SymPrompt Framework to detect, measure, and mitigate these effects in real-world AI deployments.

 

With more than three decades of experience in business & systems strategy, data analytics, information quality, and operational excellence, I help clients design AI solutions that are not only innovative but also ethically sustainable and aligned with international standards.

 

I work at the intersection of research, strategy, and governance, designing tools and protocols that ensure AI systems advance human-centered outcomes, transparency, and long-term trust.

 

We are not merely decoding knowledge; we are dreaming the architecture of understanding.
— Luminara Incepta

Certifications

- ISO 8000 Master Data Quality Manager

- Lean Six Sigma Black Belt

- CITI Group 1

- IIBA Agile Business Analyst

- Agile Project Manager

View My Resume

My research explores how Large Language Models (LLMs) amplify bias, misinformation, and error (BME) through feedback loops, model drift, and human-in-the-loop cognitive echo chambers. These effects, while often subtle, can degrade the trustworthiness of AI systems and compromise decision-making, governance, and data quality at scale.

 

To counter these risks, I developed the BME Metric Suite, which includes:

  • Bias Amplification Rate (BAR)

  • Echo Chamber Propagation Index (ECPI)

  • Information Quality Decay (IQD)

  • Emerging metrics, including PTDI and AHRS

 

These metrics provide a quantifiable lens for detecting, assessing, and governing the health of AI systems over time.
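
As a purely illustrative sketch of what an amplification-style measurement can look like, the short Python snippet below compares average bias scores across successive rounds of a human-in-the-loop feedback cycle. The function name, the keyword-based scorer, and the sample data are all hypothetical stand-ins for demonstration; they are not the published definitions of BAR, ECPI, or IQD.

from typing import Callable, Sequence

def bias_amplification_rate(
    outputs_per_round: Sequence[Sequence[str]],
    bias_score: Callable[[str], float],
) -> float:
    # Ratio of mean bias in the final round to mean bias in the first round;
    # values above 1.0 suggest the feedback loop is amplifying bias.
    def mean_bias(outputs: Sequence[str]) -> float:
        return sum(bias_score(o) for o in outputs) / len(outputs)
    first = mean_bias(outputs_per_round[0])
    last = mean_bias(outputs_per_round[-1])
    return last / first if first else float("inf")

# Stand-in scorer that counts loaded absolutes. A real deployment would use
# a calibrated classifier or human ratings; this is only for demonstration.
LOADED_TERMS = ("always", "never", "everyone knows")

def toy_bias_score(text: str) -> float:
    lowered = text.lower()
    return sum(term in lowered for term in LOADED_TERMS) / len(LOADED_TERMS)

rounds = [
    ["This always works in most cases.", "Results vary by context."],
    ["Everyone knows this always fails.", "It never works as claimed."],
]
print(f"Toy amplification ratio: {bias_amplification_rate(rounds, toy_bias_score):.2f}")

In this toy example the ratio comes out above 1.0, signaling that loaded language is accumulating across rounds; tracking such ratios over time is the kind of quantifiable lens the BME Metric Suite is designed to provide.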

 

I also designed the SymPrompt Framework, a structured approach to ethical prompt engineering. It reduces bias drift, enhances alignment, and fosters transparency between human users and generative models, forming the foundation for a new paradigm of symbiotic prompting.

 

My work is anchored in internationally recognized standards, including:

  • ISO/IEC 42001 (AI Management)

  • ISO 8000 (Data Quality)

  • ISO/IEC 27001/27701 (Security & Privacy)

  • ISO/IEC 23053 (Framework for AI Systems Using Machine Learning)

  • NIST AI Risk Management Framework


My goal is simple but urgent: to equip organizations with practical, rigorous tools for navigating the ethical integration and lifecycle governance of AI systems, before the cost of inaction becomes irreversible.
