AI Governance Lead (Wisconsin)
Position Overview
Carex is partnering with a leading financial services organization to identify an experienced AI Governance Lead to help advance the maturity of the organization’s Artificial Intelligence (AI) program. This role plays a critical part in shaping and operationalizing responsible AI practices across the enterprise—balancing innovation, risk management, and regulatory compliance. Candidates must be located in Wisconsin for consideration.
The AI Governance Lead will design, implement, and oversee policies, frameworks, and best practices to ensure AI systems are developed and deployed ethically, transparently, and in alignment with organizational values and evolving regulatory expectations. This position collaborates closely with Data, Technology, Legal, Risk, Compliance, and Business teams to embed AI governance into day-to-day AI/ML development and delivery.
Key Responsibilities
AI Governance & Oversight
- Partner with cross-functional teams—including data scientists, engineers, legal, risk, compliance, data governance, and ethics teams—to embed AI governance best practices throughout the AI/ML lifecycle.
- Establish and maintain processes for AI risk assessment, bias detection, transparency, explainability, and accountability.
- Oversee AI system deployment and lifecycle management, ensuring responsible use from design through post-production monitoring.
- Conduct regular audits and assessments of AI systems to ensure adherence to ethical principles, internal standards, and regulatory requirements.
- Identify, assess, and mitigate model bias, fairness risks, and unintended consequences during development and after deployment.
- Monitor AI system performance, reliability, and ethical risks throughout implementation and operation.
Monitoring, Reporting & Continuous Improvement
- Develop dashboards, metrics, and reporting mechanisms to track AI governance compliance and risk posture.
- Implement ongoing evaluation frameworks and tools to continuously assess the effectiveness and compliance of AI solutions.
- Document monitoring results, audit findings, and remediation actions; maintain a centralized knowledge base.
- Regularly report findings, trends, and recommendations to stakeholders and leadership.
Collaboration & Enablement
- Collaborate with technical and product teams to ensure responsible delivery of AI-enabled solutions that unlock business value while managing risk.
- Translate complex AI and governance concepts into clear, actionable guidance for technical and non-technical audiences.
- Serve as a trusted advisor to business and technology leaders on responsible AI practices and emerging risks.
Required Qualifications
- Bachelor’s degree in Computer Science, Analytics, Economics, Law, or a related field, OR an equivalent combination of education and relevant professional experience.
- 7+ years of experience in data, technology, or analytics roles with a focus on AI governance, data governance, regulatory compliance, risk management, or related disciplines.
- 3+ years of hands-on experience in data science, machine learning, or technology development.
- Strong understanding of AI/ML concepts with the ability to actively engage in technical discussions.
- Knowledge of AI regulations, ethical AI principles, AI standards, and risk management frameworks.
- Strong analytical and problem-solving skills, with the ability to break complex issues into actionable components.
- Excellent verbal and written communication skills, including experience managing and influencing diverse stakeholders.
- Proven ability to work effectively in cross-functional environments with legal, compliance, engineering, data, and business teams.
- Authorization to work in the United States without sponsorship.
Preferred Qualifications
- Experience with AI auditing tools, model monitoring platforms, or bias detection methodologies.
- Experience supporting AI initiatives within highly regulated industries such as financial services or insurance.
