MandateCore (AI Decision Governance Engine)
A real-time AI governance engine that evaluates whether AI-influenced decisions should execute, based on policy rules, authority levels, and required evidence, with full audit traceability.
I design AI governance and data platform prototypes that illustrate how automated systems can be reviewed, guided, and evaluated. This portfolio highlights applied concepts, interactive demos, and lightweight case studies.
A lightweight AI governance diagnostic and maturity assessment tool designed to evaluate how prepared an organization is to responsibly deploy, manage, and monitor AI systems.
A portfolio prototype illustrating HR reporting, lineage concepts, weighted metrics, forecasting, and reporting outputs.
An interactive prototype showing how planning, visibility, and workflow coordination can be brought together in a modern supply chain experience.
A mock Large Concept Model prototype that demonstrates rule-based concept tagging and taxonomy-driven semantic organization across enterprise data.
A framework showing how LLM outputs can be evaluated against governance criteria such as bias, safety, relevance, and instruction adherence.
An interactive demonstration of how vector databases and semantic AI can align terminology across Legal, Restaurant Development, and Finance for cross-functional collaboration.
Hosted demos will be added here as they are published.
Download resume (PDF)
Recruiters and hiring managers: email is best. LinkedIn also works.
Overview:
MandateCore is a real-time governance engine designed to control how AI-driven decisions are executed in regulated environments such as banking and finance.
Problem:
AI systems can generate decisions in seconds, but governance processes remain manual, fragmented, and slow. This creates risk when high-impact decisions (payments, approvals, fraud actions) are executed without structured validation against policy and authority.
Solution:
MandateCore introduces a runtime decision validation layer that evaluates each AI-influenced action against policy rules, authority thresholds, and required evidence before execution.
Decision Model:
Each decision is evaluated and returns one of three outcomes (see the sketch after this list):
ALLOW – decision meets all conditions
ESCALATE – requires human review
REFUSE – violates policy or lacks required evidence
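A minimal Python sketch of this three-outcome model, using illustrative names (Outcome, Decision, evaluate) rather than the actual MandateCore API:

```python
from dataclasses import dataclass, field
from enum import Enum

class Outcome(Enum):
    ALLOW = "allow"        # decision meets all conditions
    ESCALATE = "escalate"  # requires human review
    REFUSE = "refuse"      # violates policy or lacks required evidence

@dataclass
class Decision:
    action: str             # e.g. "payment" or "fraud_hold"
    amount: float           # monetary impact of the decision
    authority_level: int    # authority of the requesting agent
    evidence: set = field(default_factory=set)  # attached evidence tags

def evaluate(decision: Decision, rule: dict) -> Outcome:
    """Validate one AI-influenced decision against a single policy rule."""
    if decision.amount > rule["hard_limit"]:
        return Outcome.REFUSE       # exceeds an absolute policy limit
    if not set(rule["required_evidence"]) <= decision.evidence:
        return Outcome.REFUSE       # mandated evidence is missing
    if decision.authority_level < rule["min_authority"]:
        return Outcome.ESCALATE     # beyond the requester's authority
    return Outcome.ALLOW
```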
Technical Design:
Built using Python and Streamlit with a modular architecture that separates policy definitions (YAML), evaluation logic, authority validation, and audit logging. The system simulates real-world decision scenarios including payments, fraud detection, and approvals.
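To illustrate the policy-as-code idea, a rule like the one used in the sketch above could be defined in YAML and loaded at runtime; the schema below is a hypothetical example, not the project's actual policy format:

```python
import yaml  # PyYAML

POLICY_YAML = """
payment:
  hard_limit: 50000        # refuse anything above this amount
  min_authority: 3         # escalate requests below this authority level
  required_evidence: [kyc_check, fraud_score]
"""

policy = yaml.safe_load(POLICY_YAML)
# The evaluate() sketch above can then be applied per action:
# outcome = evaluate(decision, policy[decision.action])
```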
Key Capabilities Demonstrated:
Real-time AI governance, decision validation engines, policy-as-code, authority-based controls, explainable outputs, and audit traceability.
Outcome:
MandateCore demonstrates how organizations can shift from after-the-fact governance to real-time control of AI-driven decisions, reducing risk while enabling faster, safer automation.
AI Governance Scorecard (Maturity Assessment Tool)
Problem:
Organizations are rapidly adopting AI systems without clear governance frameworks. Leadership teams often lack visibility into how AI models are managed, whether sensitive data is properly controlled, and whether monitoring and risk management practices exist. Governance assessments are often manual, slow, and interview-driven.
Solution:
Developed an AI Governance Scorecard that combines a structured governance questionnaire with an automated database evaluator. The tool assesses governance maturity across policy oversight, risk management, monitoring, and data governance, then produces a maturity score with prioritized remediation recommendations.
Technical Design:
Built with Python and Streamlit using a modular design that separates the interface, scoring engine, database connectors, and evaluation logic. The evaluator simulates database analysis by inspecting schema metadata, table structures, column patterns, and audit fields to infer governance signals that feed a weighted scoring model.
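A weighted scoring model of this kind can be sketched in a few lines of Python; the signal names, values, and weights below are illustrative assumptions, not the tool's actual rubric:

```python
# Governance signals inferred from schema metadata, normalized to 0..1.
signals = {
    "audit_fields_present": 1.0,   # created_at / updated_at columns found
    "pii_columns_tagged":   0.4,   # share of PII-like columns with tags
    "retention_policy_set": 0.0,   # no retention metadata detected
}

# Relative importance of each signal in the maturity score.
weights = {
    "audit_fields_present": 0.5,
    "pii_columns_tagged":   0.3,
    "retention_policy_set": 0.2,
}

maturity = sum(signals[k] * weights[k] for k in weights)  # -> 0.62

# Remediation is prioritized by weighted gap: biggest shortfall first.
priorities = sorted(weights, key=lambda k: (1 - signals[k]) * weights[k],
                    reverse=True)  # -> retention first, then PII tagging
```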
Key Capabilities Demonstrated:
AI governance maturity modeling, automated metadata analysis, governance scoring, responsible AI framework alignment, and rapid governance diagnostics for enterprise or consulting environments.
Outcome:
The tool provides a repeatable way to quickly assess AI governance maturity and identify governance gaps. It helps organizations prioritize mitigation efforts, improve transparency, and align AI practices with frameworks such as NIST AI RMF and ISO 42001.
Overview:
This prototype shows how planning, visibility, and workflow coordination can support more informed supply chain decisions.
Problem:
Many organizations manage planning, visibility, and execution through separate tools, which can make coordination more difficult.
Solution:
The demo presents a simplified concept in which planning insights, status information, and workflow steps are brought together in one view.
Key Capabilities Demonstrated:
Scenario exploration, visibility dashboards, disruption awareness, and workflow coordination patterns.
Outcome:
The prototype illustrates how a more connected approach can improve clarity, responsiveness, and coordination across supply chain activities.
Greenline (HR & Sustainability Reporting Prototype)
Problem:
Large enterprises face growing pressure to report workforce and sustainability metrics across multiple regions, but many struggle with fragmented HR systems, inconsistent definitions, weak lineage, and limited forecasting.
Solution:
Greenline simulates a modern enterprise data architecture that links raw HR inputs to governance, analytics, and regulatory reporting outputs. It demonstrates lineage, global sustainability metrics, scenario modeling, and forecasting.
Technical Design:
The architecture includes multi-region HR data ingestion, structured transformation layers, lineage tracking, regional aggregation across APAC, EMEA, LATAM, and North America, headcount-weighted metrics, Monte Carlo forecasting, and exportable compliance outputs.
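Two of those pieces, headcount-weighted aggregation and Monte Carlo forecasting, can be sketched with NumPy; the regional figures and volatility assumption are invented for illustration:

```python
import numpy as np

# Headcount-weighted global metric from regional values.
headcount = np.array([12000, 8000, 3000, 15000])   # APAC, EMEA, LATAM, NA
turnover  = np.array([0.11, 0.09, 0.14, 0.10])     # regional turnover rates
global_rate = np.average(turnover, weights=headcount)

# Monte Carlo forecast: simulate next year's rate under assumed volatility.
rng = np.random.default_rng(42)
simulated = rng.normal(loc=global_rate, scale=0.02, size=10_000)
p10, p50, p90 = np.percentile(simulated, [10, 50, 90])
print(f"median {p50:.3f}, 80% interval [{p10:.3f}, {p90:.3f}]")
```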
Key Capabilities Demonstrated:
Global sustainability reporting, headcount-weighted aggregation, lineage visualization, Monte Carlo forecasting, scenario modeling, and reporting calendar management.
Outcome:
Greenline shows how organizations can move from fragmented HR reporting toward transparent, auditable sustainability analytics pipelines that support compliance and executive decision-making.
Overview:
Base Zero is an experimental prototype exploring a Large Concept Model that focuses on identifying and tagging conceptual meaning across structured and unstructured data.
Problem:
Enterprise AI systems often struggle with semantic consistency across departments. Common issues include inconsistent terminology, ambiguous business definitions, and difficulty aligning structured datasets with unstructured content.
Solution:
Base Zero introduces a concept-centric approach to interpretation. Instead of identifying only keywords, it groups related ideas under shared conceptual meaning using rule-based concept tagging and taxonomy-driven logic.
Technical Design:
The design includes a hierarchical taxonomy engine, domain-category-object classifications, cross-domain semantic relationships, rule-based tagging, metadata enrichment, and semantic alignment that supports downstream knowledge graphs.
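A minimal sketch of rule-based concept tagging against a domain-category-object taxonomy; the taxonomy paths and keyword rules are invented for illustration:

```python
# Taxonomy path -> keyword rules that trigger the concept tag.
TAXONOMY = {
    "finance/payments/invoice":  {"invoice", "billing", "remittance"},
    "legal/contracts/agreement": {"contract", "agreement", "clause"},
    "hr/people/employee":        {"employee", "headcount", "staff"},
}

def tag_concepts(text: str) -> list[str]:
    """Return taxonomy paths whose keyword rules match the text."""
    tokens = set(text.lower().split())
    return [path for path, keywords in TAXONOMY.items() if tokens & keywords]

print(tag_concepts("Quarterly invoice and billing summary per employee"))
# -> ['finance/payments/invoice', 'hr/people/employee']
```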
Key Capabilities Demonstrated:
Rule-based concept tagging, cross-domain semantic alignment, taxonomy-driven metadata enrichment, and a foundation for knowledge graphs and vector search.
Outcome:
Base Zero demonstrates early architecture for a Large Concept Model that could improve metadata consistency, semantic search, and AI governance in large organizations.
Overview:
The AI Governance Evaluator is a framework for assessing LLM outputs using governance-oriented criteria such as safety, bias, relevance, and quality.
Problem:
Organizations deploying LLMs face governance risks including hallucinations, biased outputs, unsafe responses, and inconsistent quality, but often lack a systematic evaluation framework.
Solution:
The project introduces an evaluation pipeline that scores outputs across multiple governance dimensions to support responsible deployment and oversight.
Technical Design:
The system includes evaluation dimensions for relevance, factual accuracy, coherence, completeness, safety, bias detection, clarity, helpfulness, instruction adherence, and tone appropriateness. Scores are normalized and surfaced through a governance-oriented evaluation flow.
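A sketch of how per-dimension scores might be normalized and rolled up; the dimension list is abbreviated, and the 1-5 raw rubric and equal weighting are assumptions rather than the framework's actual design:

```python
# Raw scores per governance dimension on an assumed 1-5 rubric.
raw = {"relevance": 4, "safety": 5, "bias": 2, "instruction_adherence": 4}

def normalize(score: int, lo: int = 1, hi: int = 5) -> float:
    """Map a rubric score onto 0..1 for cross-dimension comparison."""
    return (score - lo) / (hi - lo)

normalized = {dim: normalize(s) for dim, s in raw.items()}
overall = sum(normalized.values()) / len(normalized)          # -> 0.6875
flagged = [dim for dim, v in normalized.items() if v < 0.5]   # -> ['bias']
```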
Key Capabilities Demonstrated:
Automated LLM evaluation pipelines, governance scoring frameworks, bias and safety detection, and operational AI governance design patterns.
Outcome:
The framework demonstrates how organizations can embed responsible AI evaluation directly into AI development workflows instead of relying only on policy documents.
Overview:
Semantic MDM demonstrates how vector databases and semantic reasoning can align terminology across multiple departments by mapping different vocabularies to shared meaning.
Problem:
Large organizations often use different language across departments such as Legal, Finance, and Operations, making enterprise-wide analytics and knowledge discovery difficult.
Solution:
The platform introduces a semantic alignment layer that connects related concepts across domains using embeddings and conceptual similarity while allowing each team to maintain its own terminology.
Technical Design:
The design includes vector embeddings for terms and definitions, concept clustering, a semantic alignment layer, shared conceptual references, cross-domain query support, and knowledge graph integration for explainable relationships.
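The alignment layer can be illustrated with embedding cosine similarity; the toy 3-d vectors below stand in for real sentence embeddings, and the terms and threshold are examples:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-d "embeddings"; a real system would use a sentence-embedding model.
embeddings = {
    "vendor":       np.array([0.9, 0.1, 0.0]),   # Finance vocabulary
    "supplier":     np.array([0.8, 0.2, 0.1]),   # Operations vocabulary
    "counterparty": np.array([0.7, 0.1, 0.6]),   # Legal vocabulary
    "lease term":   np.array([0.0, 0.9, 0.3]),   # an unrelated concept
}

def align(term: str, threshold: float = 0.9) -> list[str]:
    """Terms from other vocabularies close enough to share one concept."""
    v = embeddings[term]
    return [t for t, u in embeddings.items()
            if t != term and cosine(v, u) >= threshold]

print(align("vendor"))  # -> ['supplier']
```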
Key Capabilities Demonstrated:
Semantic alignment across departments, vector-based concept discovery, knowledge graph construction, and cross-domain interoperability.
Outcome:
Semantic MDM shows how organizations can bridge departmental language differences and improve collaboration, analytics, and data interoperability across silos.