
Ethos, Ethics and AI

David Saraiva

Chief Executive Officer

Recently our CTO, Jesse, talked a bit about why we don’t currently use AI from a practical standpoint. Today I want to talk a bit about the ethics of it all.

One of the core guiding principles of Ethos is to build an ethical business. We haven’t followed the traditional route of seeking angel and then venture funding because we are not comfortable with the ethical tradeoffs that route requires.

There are a lot of things about the Generative AI revolution that should give a moral person pause, but let’s start with three of the biggest (in my mind) issues.

Generative AI’s Staggering Environmental Cost

Large players like Google and Microsoft have abandoned their carbon-neutral-by-2030 goals while seeing massive increases in their carbon footprints. Some of the worst offenders have shifted their emissions to indirect sources, with Amazon seeing a 182% rise and Meta a 145% rise from 2020 to 2025.

The massive focus on data center building is also creating real harm to residents who live around these projects due to the huge power and water requirements associated with a GenAI data center project.

Artificial General Intelligence (AGI)

The only way the amount of money invested in Generative AI doesn’t result in a massive crash when the bubble pops is if Large Language Models (the backbone of Generative AI) lead to Artificial General Intelligence (AGI): thinking machines that can reason and make decisions like a human.

Top AI researchers like Yann LeCun argue that LLMs are unlikely to result in AGI, but let’s assume for a minute that they do.

If an AGI model thinks like a person, behaves like a person, has social interactions like a person, and can suffer harm like a person, then the end result that so many tech CEOs are salivating over is essentially digital slavery.

Humans are already exceptionally talented at dehumanizing other humans; I imagine the cruelty we’d be willing to subject digital people to would go even further.

Apart from the moral argument, lots and lots of ink has been spilled on what happens when humanity creates and then enslaves superintelligent AI.

The Reverse Centaur Problem

Originally coined by chess grandmaster Garry Kasparov, “The Centaur” is an analogy for how technology enhances human capabilities. The general concept is that technology is the body of the centaur with the human as the head. Driving a car is one example: the human steers and makes decisions while the car moves based on the input provided.

The Reverse Centaur (coined by Cory Doctorow) refers to a scenario where technology is the head and the human is an appendage to do the bits the machine can't do. This is generally exceptionally unpleasant, especially when implemented in the world of shareholder value.

Amazon drivers find themselves subject to a suite of AI apps that watch their every move and punish them, often unfairly. If you’ve heard about Amazon drivers having to pee in a bottle, that’s an example of the working conditions of a Reverse Centaur. If you want to read more about this horrifying setup, you can here, but I’ll warn you, it’s very bleak.

So a reasonable person might say:

“Hey, Generative AI is actually pretty good at looking at cancer screenings and sometimes catching indications of cancer that a human misses, so what we should do is add this in as a ‘second check’ to provide even better cancer screening to patients.”

On its own, this would objectively improve outcomes for patients.

However, we know how the large businesses that sell and market these solutions operate. What will actually happen is the elimination of most human review, shifting to a model where an AI screens the files and a handful of human employees are expected to catch its mistakes across a massive number of cases daily, essentially taking responsibility for the errors of the Generative AI. This will result in a significantly lowered standard of care that is sold to patients as an improvement, while money that should be going to human workers is captured as shareholder value.

What do all of these concerns have in common? The real head is shareholder value, something that has consistently harmed the many for the benefit of the few. So here at Ethos, while we spend time understanding, researching, and keeping up to date with Generative AI, we have serious ethical concerns that create barriers to its use. GenAI isn’t going anywhere, so we expect to eventually find a way to incorporate it that is ethical and also provides value to our users.

If you want to read more about some of these concepts, I’d highly recommend the article Cory wrote for the Guardian here.