The Content Jedi Blog


What Do AI Ethical Concerns Actually Mean?

Artificial Intelligence (AI) is everywhere—from virtual assistants like Siri and Alexa and large language models (LLMs) like ChatGPT to recommendation algorithms on Netflix and self-driving cars.

If you've been keeping an eye on the news or scrolling through your social media feed, you've probably come across the term "AI ethical concerns" a few times. 

“AI ethics” and “ethical AI” are about as oxymoronic as it gets. The two terms go together like “honest liar,” but that doesn’t mean you can’t use AI ethically…or as ethically as possible.

What does it mean when we hear “AI ethical concerns”?

Let’s dive in. 

Privacy: Who's Watching?


One major concern is privacy. 

AI systems collect a lot of data about us—what we like, where we go, even who we talk to. It’s like having someone peek over your shoulder all the time, except you can’t see or feel them. Nothing creepy about that.

This data helps companies tailor their services to our needs, but it also raises questions about how much they should know about us. Are we okay with machines knowing our every move? And who has access to this information? 

ChatGPT experienced quite a data breach in March 2023, roughly five months after its launch. According to its parent company, OpenAI: “In the hours before we took ChatGPT offline on Monday, it was possible for some users to see another active user’s first and last name, email address, payment address, credit card type and the last four digits (only) of a credit card number, and credit card expiration date.”

Rest assured, “Full credit card numbers were not exposed at any time.”

Whew! Because I’m sure sophisticated hackers were stopped dead in their tracks, completely discouraged, vowing to dedicate their lives to charity and volunteerism.

Bias and Fairness


Another hot topic is bias in AI.

These systems are trained on data that reflects our world, which means they can also pick up on and amplify our biases. For example, if an AI is used to screen job applications and the training data is biased against certain groups, the AI might unfairly reject qualified candidates. 

It does, in fact, do exactly this.

Our friends at Amazon built a super-advanced AI-driven hiring model that worked great, except for the part where its kind-of-sexist algorithm favored male candidates for technical jobs. Amazon quietly scrapped the tool once the bias came to light.
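To make the mechanism concrete, here’s a toy sketch in Python (scikit-learn, with entirely made-up data, and emphatically not Amazon’s actual system): train a model on biased historical hiring decisions, and it faithfully reproduces the bias.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
experience = rng.uniform(0, 10, n)   # years of experience
gender = rng.integers(0, 2, n)       # 0 = male, 1 = female (toy encoding)

# Historical decisions favored experienced men: the bias we are baking in.
hired = ((experience > 5) & (gender == 0)).astype(int)

model = LogisticRegression(max_iter=1000).fit(
    np.column_stack([experience, gender]), hired
)

# Two equally qualified candidates who differ only in gender:
print(model.predict_proba([[8.0, 0]])[0, 1])  # male candidate: high probability of "hire"
print(model.predict_proba([[8.0, 1]])[0, 1])  # female candidate: near zero

The model was never told to discriminate; it simply learned the pattern sitting in its training data, which is exactly how this happens in the wild.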

Transparency: Open the Black Box


AI can seem like a mysterious black box—inputs go in, magic happens, and results come out.

But understanding how AI makes decisions is important for building trust. If we can’t explain how an AI reached a certain conclusion, it’s hard to evaluate its fairness or accuracy. 

Advocating for transparency in AI development means pushing for systems that are explainable and understandable.
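What does “explainable” look like in practice? Some models can show their work. Here’s a minimal sketch in Python (scikit-learn, with made-up loan data) using a decision tree whose learned rules print out in plain language:

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 500
income = rng.uniform(20_000, 120_000, n)
debt = rng.uniform(0, 50_000, n)
approved = ((income > 60_000) & (debt < 20_000)).astype(int)

tree = DecisionTreeClassifier(max_depth=2).fit(
    np.column_stack([income, debt]), approved
)

# export_text prints the learned if/then rules, so a human can audit
# exactly why any application was approved or denied.
print(export_text(tree, feature_names=["income", "debt"]))

Simple, auditable models aren’t always an option, but this is the kind of visibility transparency advocates push for wherever the stakes are high.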

The Future of Work: Friend or Foe?


AI is changing the job market, and while it can create new opportunities, it also poses a threat to existing jobs. 

Automation might replace some roles—believe me, I’ve lost sleep over the specter of ChatGPT rendering me obsolete. There are widespread concerns about AI leading to unemployment and economic disparity. 

However, the U.S. job market expanded in April for the 40th consecutive month, and The Wall Street Journal reassuringly reported in its article “AI May Not Be a Job Killer, After All”:

“The big claims about AI assume that if something is possible in theory, then it will happen in practice. That is a big leap…To be clear, AI (in particular, large language models, or LLMs, like ChatGPT) can do useful things that make workers more productive. But, if anything, AI will generate more tasks for human workers than it is likely to eliminate—as we discovered when we reviewed current research on the effects of AI and talked to vendors who are developing AI and employers who use it.”

Accountability? Hahahaha!


I spoke too soon. There is a term even more oxymoronic than “ethical AI”: “AI accountability.”

Pardon my cynicism, but we simply cannot rely on those in charge to hold themselves to account. When has Mark Zuckerberg ever taken responsibility for the spread of disinformation on Facebook? Not even congressional hearings changed his behavior. 

Sam Altman, CEO of OpenAI, was fired by his own board last year over concerns that he hadn’t been candid with them, only to be reinstated days later.

They’re in that rarefied air of having too much power, which makes them effectively untouchable.

But hey, we users and gobblers of AI innovations aren’t off the hook either. We make a deal with hot dogs: though we know they contain nitrates and chemicals that probably don’t belong in food, we eat them because they taste good and because they’re so damn American. Similarly, we consume and use AI with a basic understanding that its capabilities derive from some shady-as-shit worst practices.

And so we’re better off holding ourselves accountable as practitioners of AI than depending on its creators. 

Let’s be ethical AI users, a term that doesn’t have to contradict itself.

About the Author, David Telisman




I am a Writer and Content Creator, and I work with businesses to inspire their customers to buy from them. I believe that my clients deserve to feel proud of how their content marketing looks and what it says, and I deliver by providing expert copywriting and marketing solutions.

Sharing my passion through words is my craft, and I could add value by helping you voice yours. Contact me here, at david@davidtelisman.com or 224-645-2748.

Subscribe to our blog and YouTube channel, and follow us on Facebook and LinkedIn.
 
