As a PhD scholar in the humanities or social sciences, you are an expert in critical thinking, ethics, and the complexity of human society. You might feel that the current “AI boom” is a party you weren’t invited to. After all, isn’t AI just for engineers and data scientists?

Absolutely not.

We are entering a new phase of the AI revolution. The initial rush to “build it fast” is being replaced by a desperate need to “build it right.” Companies are facing a crisis of trust—from algorithmic bias and “hallucinations” to data privacy scandals. They are realizing that coding skills alone cannot solve these problems.

They need experts who understand values, context, and consequences. They need AI Ethicists, Policy Researchers, and Trust & Safety Strategists.

Your PhD training didn’t just prepare you to write a thesis; it prepared you to be the conscience of the machine.


The “Trust Gap” in AI: Where Your Skills Are Needed

Engineers are trained to answer: “Can we build this?” You are trained to answer: “Should we build this? And if we do, who will it harm?”

This is the “Trust Gap,” and bridging it is a multi-billion-dollar business imperative.

1. Identifying Bias & Inequity (Your Critical Theory Skill)

  • The Problem: An AI hiring tool discriminates against female candidates because it was trained on historical hiring data that encoded past discrimination.
  • Your PhD Skill: You understand structural inequality. You know how to interrogate a dataset not just for “bugs,” but for societal bias. You can predict how a tool might marginalize specific communities before it even launches.
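Interrogating a dataset for bias often starts with something very simple: comparing selection rates across groups. A minimal sketch, using invented data and the EEOC’s “four-fifths rule” heuristic (a ratio of the lowest group’s selection rate to the highest group’s below 0.8 is a common flag for potential disparate impact):

```python
# Minimal bias-audit sketch: selection rates by group and the
# adverse-impact ratio. The data below is invented for illustration.
from collections import defaultdict

def selection_rates(records):
    """records: list of (group, selected) pairs -> {group: selection rate}."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in records:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def adverse_impact_ratio(rates):
    """Lowest group selection rate divided by the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes from a resume-ranking tool.
outcomes = ([("men", True)] * 60 + [("men", False)] * 40
            + [("women", True)] * 30 + [("women", False)] * 70)

rates = selection_rates(outcomes)    # men: 0.60, women: 0.30
ratio = adverse_impact_ratio(rates)  # 0.30 / 0.60 = 0.50, below the 0.8 flag
print(f"Adverse-impact ratio: {ratio:.2f}")
```

The arithmetic is trivial; the expertise lies in knowing which groups to compare, why the historical baseline itself may be tainted, and what a flagged ratio does and does not prove.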

2. Defining “Fairness” (Your Philosophical Rigor)

  • The Problem: A company wants its AI to be “fair.” But what does fairness mean? Is it equal outcome? Equal opportunity? Or procedural justice?
  • Your PhD Skill: You have spent years defining complex, abstract concepts. You can operationalize “fairness” into a concrete framework that engineers can actually build towards.
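Why does the definition matter so much? Because once operationalized, different fairness definitions can contradict each other on the same system. A toy sketch with invented records, contrasting “equal outcome” (equal selection rates) with “equal opportunity” (equal selection rates among qualified candidates):

```python
# Toy illustration: the same hiring outcomes can satisfy one fairness
# definition and violate another. All records are invented.

def rate(flags):
    """Fraction of True values in an iterable of booleans."""
    flags = list(flags)
    return sum(flags) / len(flags)

# Hypothetical records: (group, qualified, selected).
records = [
    ("A", True, True), ("A", True, True), ("A", False, False), ("A", False, False),
    ("B", True, True), ("B", True, False), ("B", False, True), ("B", False, False),
]

def selection_rate(records, group):
    """Equal outcome: overall selection rate for a group."""
    return rate(sel for g, _, sel in records if g == group)

def equal_opportunity_rate(records, group):
    """Equal opportunity: selection rate among qualified members only."""
    return rate(sel for g, qual, sel in records if g == group and qual)

print(selection_rate(records, "A"), selection_rate(records, "B"))  # 0.5 0.5
print(equal_opportunity_rate(records, "A"))  # 1.0: every qualified A is hired
print(equal_opportunity_rate(records, "B"))  # 0.5: half of qualified Bs are hired
```

Both groups are selected at the same overall rate, yet qualified candidates in group B are hired half as often as qualified candidates in group A. Deciding which definition the system should satisfy is a philosophical judgment, not an engineering one.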

3. Anticipating Unintended Consequences (Your Historical Context)

  • The Problem: A new social media algorithm maximizes engagement but inadvertently radicalizes users.
  • Your PhD Skill: You are a historian of cause and effect. You can analyze second-order and third-order consequences. You can build the “Red Team” scenarios that predict how a well-intentioned tool might go wrong in the real world.

What Does an “AI Ethicist” Actually Do?

At McKinley Research, our Responsible Innovation practice doesn’t just write philosophical essays. We solve specific business problems.

  • Algorithmic Auditing: Reviewing an AI system to identify risks of bias or harm.
  • Policy Development: Writing the internal “Constitution” for a company’s AI—what it is allowed to say, do, and generate.
  • Stakeholder Impact Assessments: Interviewing the people who will be affected by an AI system (not just the users) to ensure their rights are protected.
  • Trust & Safety Strategy: Designing the guardrails that keep an online platform safe from disinformation and toxicity.

Your PhD is Your Competitive Advantage

The tech world has enough coders. It has a shortage of critical thinkers who can navigate ambiguity and ethics.

At McKinley Research, we believe that the most powerful technology requires the deepest human insight. We value the rigor, the historical perspective, and the ethical reasoning that PhD scholars bring to the table.

If you want to shape the future of technology—not by writing code, but by writing the rules—a career in AI Ethics and Strategy might be your perfect path.

Ready to apply your critical mind to the biggest challenges in tech? Contact McKinley Research to learn about careers in Responsible Innovation.