Mihretu Guta: How the Church Should Respond to the Ethical Dangers of AI

Mihretu Guta
Photo courtesy of Mihretu Guta

Dr. Mihretu Guta is a philosopher with expertise in metaphysics, philosophy of mind, cognitive neuroscience, and philosophical theology. He has been teaching at and around Biola University since 2016, conducting research on many issues, including philosophy of AI.

Joining Ed in this week’s conversation is Steele Billings. Steele is a technologist who currently leads Gloo’s AI & the Church Initiative, which includes a $25 million fund that invests in faith-driven organizations leading research and development around the responsible use of AI in ministry.

Other Ways To Listen to This Podcast With Mihretu Guta

► Listen on Amazon
► Listen on Apple
► Listen on Google
► Listen on Spotify
► Listen on YouTube

Other Podcasts in This Series

“Kenny Jahng: What Church Leaders Need To Know About AI” by Jessica Lea on ChurchLeaders
“Yvonne Carlson: How To Use AI Effectively in Your Ministry” by Jessica Lea on ChurchLeaders

Key Questions for Mihretu Guta

- What is “singularity,” and why should we be aware of it?

- What are the concerns that you see about artificial intelligence from a church leader’s perspective?

- Are there ways that Christian leaders can be thinking about how to use AI as a tool to promote human flourishing?

- What should we be doing right now to get to a place where the church can fully leverage AI and make the most of the moment?

Key Quotes From Mihretu Guta

“Technology is not value neutral or worldview neutral. So if we take a ‘strong AI,’ it comes with lots of metaphysical baggage.”

“‘Singularity’ is the idea that computers, when they get to that stage, they wouldn’t need you at all. They can make their own independent decisions…that’s the big vision of defenders of a strong AI. It’s not the tool-making vision; it’s just proving that human beings are inferior to gadgets.”

“Are we [at singularity] yet? No, we are not. Will we ever be there? I have serious doubts. My doubts are not technical; my doubts are metaphysical. So for metaphysical reasons, I don’t think we have that property, actually, to create machines that do think like us or are capable of doing what we are as human beings capable of doing.”

“The concept of trust is a social concept, it’s a philosophical, metaphysical concept. It’s also a moral concept.”