Why Using Your Own Voice Is More Powerful Than AI-Generated Content

With OpenAI and ChatGPT, the era of AI-generated content has arrived. ChatGPT can actually learn from its mistakes and constantly improve through reinforcement learning. While that sounds great, it does not spell the end for human-generated content. At the end of the day, people can still tell when something is fake or AI-generated. Your own human voice is still going to be much more powerful than an AI’s. Learn exactly why today!

Join Tom Hazzard and Tracy Hazzard as they share their insights on the future of AI-generated content and why it’s not enough to overtake human-made content. Discover what OpenAI is and why ChatGPT is all the rage right now. Just how far can artificial intelligence learn? Find out today!






Welcome back to the show. I’m with my better half. You are going to realize why she’s my better half in this episode. Before I talk about this episode’s topic, we need a celebration moment because this is our 200th episode. I didn’t even know that until now.

I got the message from the team. We have a team who submits our episodes for us. I don’t see the numbers until they publish them. We wouldn’t have recorded with that in mind. It would’ve been published, and this one would randomly have been episode 200. I got the message before we recorded, and I told our youngest daughter that it was our 200th episode.

I said, “We recommend that podcasters celebrate. What would you do to celebrate recording 200 episodes?” She said, “Have a party with cake.” That’s what eight-year-olds think about. She started going off, “Have confetti and let’s have a Funfetti cake.” She was going wild with it. I was like, “This is what we all need though. That little bit of mindset shift to remembering that celebration is more than just an acknowledgment of it. We also need to have a little celebration. Maybe a little margarita later. We need to celebrate more. All of us do.”

Two-hundredth episode margaritas, you’re on later. Let’s do it.

The calculation though is that I now have tipped over recording 1,000 episodes personally.

You’ve done much more than that.

I’ve recorded lots of interviews and all of those things, but I personally recorded and published over 1,000 episodes.

I’m surprised it’s that little because we did 650 with WTFFF!? You’ve had how many for The Binge Factor?

We’re about 150.




Two hundred here. I thought it was well over 1,000. Anyway, it’s a lot. That’s great. Aside from the celebration moment, thank you for indulging us to do that. I think all of you that achieve a milestone like that should celebrate and acknowledge it because it’s a significant achievement. Your average podcaster doesn’t get past twenty.

It’s an accomplishment.

We have a fantastic topic and message we want to share with you. This is meaningful and pretty powerful stuff. Tracy, do you want to say something before I reveal the topic?

I wanted to say thank you to you for being my cohost for 31 years. As we’re recording this, this is not only our 200th episode but it is our 31st wedding anniversary. I wanted to thank you for being my cohost all along the way. The fact that you indulge my interest in some of these unusual things, like what we’re going to talk about in our topic, is one of the reasons I married you. We love to have conversations. We love to have a mindshare and think about these innovative topics. That’s one of my favorite things about being married to you and working with you.

Thank you very much. I appreciate it. The feeling is completely mutual. I could never have recorded 650 episodes of a podcast about 3D printing without you. For me, cohosting a podcast is a lot more fun. Not that I don’t get a lot out of podcasts that I record that are just me. I certainly do. I enjoyed providing that value to others. Doing this with you is the great pleasure of my life and being married to you for 31 years. I still feel like a teenager. I can’t believe it’s been 31 years. We actually met when we were teenagers.

I was seventeen when I met you.

You were 17 and I was 18. Let’s get on to providing some value to our readers. We’ll celebrate more later. This topic is exciting. I’ve been looking forward to this episode. We want to talk about why using your own voice is more powerful than AI-generated content. Why are we talking about this and AI-generated content in general? It’s because it’s happening all around us. A lot of you may not realize it, but Tracy has been studying and researching AI for a number of years and testing it. We’ve been doing some more testing with the emergence of ChatGPT, but I’m getting a little ahead of myself. Tracy, I want to throw it to you to lead us in this discussion because you truly are our resident expert in it.

I wouldn’t consider myself that. This is new territory. The reason we brought up our anniversary and our milestones is that whenever we hit an anniversary or a milestone, we start looking at what’s next. This is our opportunity to talk about the future. I’ve been considering for some time what the future of podcasting is.



There are lots of people who say the future of everything is AI or artificial intelligence. I’ve been looking at that saying, “I’m not so sure.” I like to look at everything with a slightly skeptical view. It helps me be a little more objective about figuring out the best way to use something when I approach it from a standpoint of skepticism, without sticking my head in the sand and saying, “AI is never going to happen.” I don’t believe those people at all either.

We look at innovation from the standpoint that it’s highly unlikely to take over the world the way people believe, because that doesn’t happen very often in innovation in general. It’s usually a whole lot slower than everybody imagines it will be. We were talking about our 3D print podcast, which kicked off everything that we do here. They predicted that everyone would have a 3D printer in their kitchen or in their house somewhere. How many of you have one?

They predicted it would be by now like microwaves, which it’s not.

It’s not the way it goes. We know that because we’ve worked on so much of this disruptive technology over time. Having this skeptical view of things helps us understand where the roadblocks are. When we become a big proponent of something and jump on the bandwagon, we have a harder time understanding how we’re going to bring everybody along with us.

That skepticism can be healthy. At the same time, I want our audience to understand that we have dived into this technology. We’re not just talking about it and being armchair quarterbacks here. We’ve been testing. You, in particular, have been testing AI with your podcast for a couple of years because others have come out with AI that was supposed to help with generating highlight clips automatically and things like that. Tracy, you’ve been testing stuff for a couple of years and found unimpressive results in the past.

I’m going to gloss over that a little bit. We’re going to do a future episode because I want more time testing out ChatGPT, specifically because I’ve only been testing it. It’s been available to most people for about a month. It’s had hit-and-miss access. Some of you may get logged in and it’s too busy. You can’t access it now. It’s been like that for me as well. I haven’t gotten as much time on it as I would like, but I have spent a lot of time on other AIs and testing them out. I’m going to provide some future guidelines, as well as discussions of some of those different types of AIs in a future episode.

Right now, we’re going to be focused on why ChatGPT is changing the game a little bit. That requires a slight history lesson. If everybody will indulge me for a moment, hopefully it’s going to be understandable. The next time someone asks you, “What about this AI stuff in podcasting?” you can say, “I know all about this,” and recite what I’m saying here or summarize it.

Most of the other AIs have been based on a couple of different technologies. One of them is DeepMind, which is Google’s version of things that you may have heard of or seen tests about. ChatGPT is the third version of something that has come out from this whole group called OpenAI. I’m going to talk about it and give you a little historical context.


We’ve been seeing some things that are building off it, and there are a bunch of other smaller AIs that have been created over time. To truly be the artificial intelligence that ChatGPT is, it needs to learn. It doesn’t just need to automate something. Let’s be clear here. There is artificial intelligence that merely automates things.

Every time we see a chatbot, we think it’s artificial intelligence. The reality is it’s all programming. It’s pre-programmed to follow a path. That’s automation. I want you to understand the difference here. That’s not to say it isn’t smart, because it’s fantastic and it saves us all time and energy. That’s great, but it is not learning. True artificial intelligence in this model learns in some way, shape, or form.

This is a big distinction, Tracy. You definitely need to share with our readers, the way you did with me as we were preparing for this, what that difference is, so you can understand what the leap is.

I’m going to touch back on this in a minute. In ChatGPT, the GPT stands for something. It’s Generative Pre-trained Transformer. Generative, meaning it generates new content for you rather than just retrieving it. Pre-trained, so it has training wheels and guardrails on it. They pre-programmed some of this into it. It’s pre-programmed where it gets its information from. It’s pre-programmed to do some of these things. It’s pre-trained. Transformer refers to the model architecture it’s built on, which takes what it has learned and transforms your input into something else for you. It should generate a result for you. That’s the idea of what it means.

It is what they call a reinforcement learning platform. It learns, but it requires someone to reinforce its learning and give it feedback. There are lots of AIs out there that don’t have reinforcement learning. They don’t have a feedback loop. We call them AI, but there’s no feedback loop in them. Transcription systems are a famous example: they have no feedback loop.

There are many different transcription systems out there that people use to transcribe their podcast audio for one reason or another. They’re often called AI. Rev.com has one called Rev.ai. There are also others, and it’s a one-way flow of information from the audio to the transcript. There is no feedback loop, which inherently limits its ability to learn.

I want you to keep that in mind. What makes ChatGPT so interesting is how it’s learning; its reinforcement learning is a significant part of it. I mentioned DeepMind, the Google version, before. It’s out there and it is a reinforcement learning process as well, but it has much tighter guardrails because Google owns it. It’s privately owned.

OpenAI, the company and the nonprofit, has a weird structure. OpenAI is technically a nonprofit. The revenue-generating platform that they’re building is the for-profit side, but it has one shareholder, which is the nonprofit. It’s an interesting structure. The criticism that has come is about who has put money into the nonprofit. They don’t call it invested. They call it pledged because they’ve pledged the funds to the nonprofit. As the nonprofit needs the money, they use it. The money goes in. It doesn’t sit there the way an investment does.



Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, AWS, Infosys, Y Combinator Research, and Microsoft. You have Sam Altman, who is the CEO, or I’m going to say the kickoff founder. It happened with a group of people, but he’s the kickoff founder. He’s from Y Combinator fame as well. All those individuals and companies, including AWS, Infosys, and Y Combinator Research, came in with a $1 billion pledge. Together, they pledged $1 billion at the beginning of this. Microsoft came in and matched that $1 billion, which is an interesting change. That happened.

OpenAI is built on the Azure platform, which is Microsoft’s cloud computing. It’s like AWS or their version of it. It’s critically important that it’s built on this. Right now, they’re gating the number of people on it for a reason. They don’t want to overwhelm the service. They don’t want to overwhelm the space. They want it to run. It requires a lot of computing power and heavy lifting to run AI. That’s part of why this is a good test for Microsoft in the future. It’s a good investment. My view of it is this is one of the smartest investments they could make in the last decade.

$1 billion seems like a lot of money, but I think it’s a drop in the bucket. They’re going to need ten times that in the near future in order to get where they’re going. Their plan is to create “The API.” It’s not an API among many or multiple APIs. They call it “The API,” for all to use. That is OpenAI’s mission. The idea that this is open for all is what is going to keep AI from going terribly wrong. This is their idea to prevent the Terminator Cyberdyne Systems future. OpenAI is based on the premise that in order to reduce the risk of a Terminator future, everyone should have access to AI. It shouldn’t be in the hands of corporations like Google. That’s how they’re going up against Google on this.

Others criticize that it’s going to go wrong anyway. The internet still has the deep web. There’s still a dark side, like cryptocurrency and crime. There’s a dark side to all of these things. There always will be, but if the majority of what it’s being used for is good, maybe they’re right. I’m going to share my view on this with you in a minute, but I want to finish giving you the broad-brush history lesson on how this works.

DeepMind is the direct competitor. DeepMind is a London-based company that Google bought in 2014. DeepMind is actually a lot more mature than people think because it was already pretty mature in 2014 when they bought it. There are a lot of applications that they’ve already been testing and using it for. There have been a lot of reports that Google actually scaled it back because they felt it was too good. I can’t confirm whether this came straight from Google or not. I’d be happy to link you to the article I read on this one.

Whether or not that’s true, it’s probably a good indicator of what’s likely. Imagine if your search on Google was so efficient and so quick that it got you to the perfect answer and not just a listing of answers. Are you going to spend as much time on Google as you do now? Are you going to get served up enough ads? If the AI works, then they lose money, because the majority of Google’s money comes from advertisements.

This reminds me, Tracy, that you and I both spent about two hours on the phone with someone who we regard as an expert in so many things tech. He’s an ethical hacker. That’s a good way to describe him. He thinks that within a couple of years, whatever ChatGPT becomes, this AI will fundamentally change Google, if not kill Google, at least the way they’re doing business now.

It’s going to harm their business model. Rumor is that they already have it. They already know how theirs works. It’s so good that it would harm their business. They’re trying to figure out what to do with it. Bringing out ChatGPT is smart because it pushes them to have to figure this out faster. I don’t think you should count out Google. That’s my view of the world. I don’t think you should count a company out that has much to lose who already knows what this can do. It’s not like they’re seeing someone else do this for the first time. There are a lot of benefits from ChatGPT and it’s amazing, but we don’t actually know how DeepMind works because it’s not actively in use. You can’t play with it to know.


It’s not publicly available.

It could be great or it could be just as good. We don’t know. Rumor has it that Microsoft is prepared to put this into Bing as soon as March of 2023. Depending on when you’re tuning in to this, it could be after that, and you could already be seeing its effect on Bing. I think it’s smart, Tom. This is what I want to say. Microsoft’s business model is extremely different from Google’s. They don’t need advertising dollars. They have arrangements with Apple. Bing powers some of the searches that Apple does.

What do you mean? Is Apple search based on Bing?

Some of the searches that they use underneath things to give you answers like the weather.

Isn’t that the Safari stuff? I knew that Bing is the search engine underneath Alexa and AWS.

Some of the searches are what I’m going to call more enterprise-level searches. In order to get you an answer, your phone is tapping into a Bing search to get the current results. It’s more of this API, enterprise level. It’s not like I’m getting served up an ad because I’m using the Bing search engine the way I would with Safari or Chrome or something like that. It’s different. Bing’s business model is a little more enterprise-focused. Otherwise, they would’ve killed it a long time ago. Why else would you keep it going?

I think Bing is 1% of internet searches.

That’s using Bing as your sole search engine, not running underneath other searches. If you speed that up, improve it, and make it efficient, can you imagine how much more money you’ll make in enterprise solutions?



You already have your hooks into all these other devices and systems that use it. That’s pretty smart.

The second thing is that it’s very efficient and has a much bigger mobile basis. When I search on my phone, I am less likely to spend time going down a rabbit hole than I am on my computer. I’m not going to go through everything on the first page of Chrome, which is a Google-based product for those of you who don’t remember. I’m not going to Google something and then spend hours going through the first couple of pages of results.

I’m going to click on the first thing that’s there. I need an answer quickly if I’m on my mobile device. When it’s wrong, I’m annoyed. Google gets the brunt of me being pissed off that they gave me a bad answer. I want a more accurate answer. This is where AI can be much more useful in the search process. If Bing is delivering those answers on a mobile device, then we have a much better shot at getting a better answer.

If I ask my Alexa device for an answer and she gives me the best answer for me because she’s learning me, she knows me, our whole household will trust that faster and will continue to reinforce and use that in a better way than we would any other device to get an answer. I just did it now. I was checking Greenwich Mean Time against our local time and I asked her. What she didn’t do was give me the time zone in Pacific time. She didn’t give me the specific time I asked for. She gave me, “It’s noon in Greenwich Mean Time.”

It’s 4:00 in the morning in California. She didn’t tell you what time it is now, but maybe the way you asked the question was different. This gives us some great background on ChatGPT and this AI. Let’s get this to where our audience can understand why this is so next level.

ChatGPT is the third version of things that have come out from OpenAI. They’ve generated other ones that didn’t work as well as this one does. There were lots of criticisms about them. There were lots of issues, like giving stupid results. I call them the Amelia Bedelia results. For some of you who don’t know, Amelia Bedelia is a children’s book. I hated that book as a child. I absolutely hated it. I didn’t understand why people would read it. Why would you be so literal about something? I never understood that. That indignant attitude is how we come at these AIs. We think they’re stupid because they take our questions so literally. They don’t get us.

It’s interesting the way your mind works and my mind works. It’s very different because I thought those books were very funny as a child. They were read to me as a child because it deals with nuance. It plays on words and puns. I always thought those things were very funny.

I was like, “She should be smarter than that. She should get us.” This is how we approach our AI. We look at it and we think, “If they don’t get us, then it’s not right. It’s not going to work. It’s never going to make it. It’s never going to happen.” That’s what happens. There are things like, “What would happen if Christopher Columbus discovered the Americas in 2014?” He didn’t discover the Americas in 2014. The AI has to be smart enough to say, “Hypothetically, if this happened.”


That’s what they don’t do. Instead, they go through and give you what happened historically, but it doesn’t make sense in 2014. This is how those previous AIs were tested by others. The criticism is that they’re not smart enough to know that you shouldn’t answer these questions in that way. You should approach them in a different way. You end up with sometimes very controversial, racist, and sexist results because they’re based on taking something historical and stuffing it into the current day as an answer.

It isn’t very helpful.

There are a lot of criticisms about ChatGPT and about OpenAI having too many guardrails, filtering out sexist and racist results, and being too leftist in its viewpoint on the world, or too politically correct, if that’s the way you want to say it. Here’s what I think. This is my view of it. This is my perspective on it. I think that they have to do that. Otherwise, the conversation becomes about the racist results it puts out and not about what it can potentially do, and you end up talking only about the controversy of it. That’s where people are going to go. They’re going to test things out. What did we do when we got the internet? We looked for sex. What is that song from Avenue Q?

It’s something about the internet being for porn or something like that.

That’s what people are going to try to do. If we did that with this, then that’s all we’d ever talk about. It hurts future investment dollars. ChatGPT is nothing if not a shot across the bow to the entire industry, saying, “We are where you should be investing your dollars. You should be joining our network. You should be joining OpenAI as a nonprofit. You should be a part of the future.” That’s what this is about. It’s about bringing more dollars into it, because if more dollars happen, it’s going to proliferate everywhere. It’s going to be a better AI at the end of the day.

If we leave it in this conversation about the controversy, we never get to its potential. That’s why I believe they did it. They didn’t do it to be leftist about this. They didn’t do it to be politically correct. They did it because you have to remove this from the conversation. For a decade now, that’s the conversation around where AI has gone terribly wrong in the past. Let’s remove it from the conversation. That’s what they did. That’s one of the criticisms that you’re going to hear lobbed against it. I think this was necessary in order to prove where they’re going with it and its mission on what it can do. You can see its great business potential for yourself. The reason ChatGPT is so brilliant is that it is based on reinforcement learning.

What does that mean, Tracy? Can you give us an example?

Let’s do it with headlines because I do this all the time. I use AI to generate headlines for articles. It generates a headline for me based on my copy and what I give it. I don’t like it, so I don’t use it. That’s what happens a lot of the time. I just don’t use it. Sometimes I might refresh. At least then I’m telling it, “You didn’t give me the result the first time,” but it doesn’t know. If I copy and paste that headline out and then use it in my blog or in my podcast title, it doesn’t know I used it. It just knows I stopped. All the headline tools I’ve ever used have no feedback loop because the feedback isn’t happening within the system.



How is ChatGPT different? What did you do differently when you were testing it?

ChatGPT has built this in. Not only do I say, “I want to regenerate a new response,” but I type in a chat telling it what’s wrong and regenerate it. I say, “The headline is not emotional enough. I would like it to have a better tone. I would like it to be more exciting.” I would tell it those things and then say, “Regenerate.” It’s learning what I didn’t like about it naturally in conversation with me.

You’re actually communicating in ChatGPT with ChatGPT, providing it feedback, asking another level of depth of questions, or asking refinement. I remember you told me at one point you said, “I want it to have more energy.” You also used it in a way where you said, “I want this to read like it was said by a Hollywood announcer,” or something like that. You give it some context.

I gave it the context of what I was looking for, and then it generated a new one. It gets better every time. That’s fascinating. They’ve built into the process a way to encourage us to close the feedback loop. Humans are inherently lazy. When we don’t get the results we want, we don’t tell anyone. We just move on, unless it was so offensive that we complain about it. When something is right, it doesn’t learn that it’s right. It needs the reward that tells it this was correct or that I liked the results.

They also have an upvote and a downvote. It’s encouraging us to say, “This was good and I’m moving on. I’m taking this and going on to the next thing,” or, “This was bad and I’m giving up. I downvote it.” It’s like what happens on Reddit, if you guys are users of that. They upvote and downvote things. By naturally encouraging the conversation, I’m telling it where I want it to go and why it wasn’t good. I’m getting it to refine quickly.

This is what happened to me. I told it, “The headline is not energetic enough.” It interpreted what I meant by energy. I tried not to give it a specific direction. It gave me some energy, and then I said, “I want it to have positive energy.” I added that word because I thought the headline sounded energetic, but it was a little harsh. I wanted it to have positive energy. It reworded it, and it was perfect. I clicked the upvote and I moved on. The next time I typed in and asked it to do a title, it automatically gave me positive energy because it learned what I wanted. It starts from that place and continues on.

You have an ongoing chat within it.

We’re going to talk about how to use these things in a good way in a future episode because it’s a whole conversation about how I use this and what I do. Essentially, I’m doing all my headlines in one place because I know it’s going to continually learn. It’s something I learned because I recommend the CoSchedule Headline tool to all of you, which I like. It uses these terminologies, like emotion and energy. It talks about them, how it keywords them, and how it uses them. I bet you anything OpenAI knows about the CoSchedule Headline tool and is learning from it. One of the things that I want you to be aware of is that the OpenAI platform and ChatGPT don’t have anything after 2021.


That’s one of these guardrails they put on it because they had to limit the data set that it’s using. Is that right?

I think it’s not because of that. I think they don’t want it to inadvertently have an influence on current politics or something currently in the news like the war. It doesn’t want to inadvertently do something that’s so current in the information that it could have an influence on what’s happening right now. I think it’s a guardrail from that standpoint because they don’t know what people are going to use it for.

The whole point of putting this out is to see what people are going to use it for so that they can develop The API that makes sense. That’s what they’re trying to figure out. It is their business model. That’s what they need from this. One of the things that I discovered in the process of using it is that it’s brilliant and it’s conversational. The other AIs can’t get you to tell them that you disliked a result. You just didn’t use it. They don’t learn enough when you don’t have that training built in, in this conversational way.

I was confused. Why would you want another chat tool? I didn’t understand that before I got in to use it. I understand now that, because of this reinforcement learning process, the chat format is actually for us, because we won’t do what they need us to do otherwise. We won’t do our share of the feedback loop because we’re too lazy. Creating this conversation makes us do it in a way that is very comfortable and easy for us. That’s how it’s going to get better and better. It’s why it’s so impressive: it actually interprets what I’m saying. It doesn’t get stuck on failing to interpret what I mean by energy.

It makes an interpretation. It doesn’t give up. It might get it wrong, but I tell it what type of energy I’m looking for and I refine it. The next time, it does it right from that standpoint. It’s going to be interesting to see how it keeps playing out, the more I play with it, and how much smarter it gets. Here’s what I found with almost all the AIs I played with before: they actually get worse over time.

Let’s explore that because I think it’s important. This is where there ends up being some good news for creative types like podcasters. We want to end on that positive note when we get there. We’re not there yet, but let’s talk about why other AIs have that problem.

A lot of AIs are based on pulling the most common pieces of information that are out there. Think about it as Google taking the more-is-more model. If more people are saying this out in the world, then it must be right. The AI is not making a judgment call. The search isn’t making a judgment call on that. Google eventually had to put in authority value, because just the fact that many people are saying something doesn’t make it the right answer. How do we weigh expert resources more heavily than the general public, so that we get real health information, or we get real weather information from the weather bureau and not from the guy with the binoculars next door? Where are we getting real weather?

It becomes a matter of weighing your sources of information. The problem is when something is brand new. This happened to us in our 3D printing world. Things were moving so quickly in 3D printing that Google couldn’t keep up with it. One out of every five searches on Google is a query it has never seen before because the topic is so brand new. Twenty percent of searches are something Google has never seen before.



People are searching for so many different things all the time, every day. About 20% of what people type into Google is an exact search query Google has never seen before.

It’s astounding that it’s that high. What it shows you is that things are changing quickly and we’re looking for answers, answers customized to us. That’s why what I type in isn’t the same as what Tom types in. Our queries are all diversely different. That’s a cool, amazing thing about human beings. If our technology can’t keep up with that and can’t develop to meet it, where is it going to find its resources?

Remember that ChatGPT and OpenAI’s models are pre-trained. It has sources for some things, but for other things, it has no source at all. Where is it going to find thought leadership on 3D printing? We were talking about thought leadership in 3D printing, and Google was looking for this. OpenAI’s ChatGPT might be looking for something like that in another topic, because 3D printing is a little old. As of this recording, it’s a bit of an older example.

What we found when we were doing our podcast was that our podcast was so much faster and more current than articles and anything else that was coming out in the market. Because we were blogging them, they were in a written searchable format for Google. When some new question was asked of it, Google would send us the traffic automatically, assuming we had the answer.

Maybe it was because we were the most relevant thing that existed, even if Google couldn’t find exactly what someone was searching for. Google rarely provides you with no results. It will provide you with the closest thing it knows of.

It will provide you with the most credible resource for getting the answer if it doesn’t see one readily apparent. They are saying, “These people have a podcast. They publish every week. They’re likely to cover this topic in the next week and we don’t know it yet, so let’s give them the traffic.” That’s where you get authority value. You’re getting authority on the idea because Google sees you as a thought leader they’re looking to for answers, and they’re going to be expecting you to give that to people.

Now, you’re in a thought leadership position. Very often, websites are built to pull that information from other resources. The news feeds pull in information from other places, from their chosen sources. Almost every article that is written cites resources or a quote from somebody. Almost everywhere else, they’re not coming up with the original thought. Videocasts and podcasts are uniquely suited for that thought leadership position. What happens when everyone is pulling information from the same old places? What happens, Tom?

This is what happened with earlier versions of AI. Without that feedback loop, whatever people are typing into it, it’s basically assuming that more is better: whatever there’s more of is what it will give back to people. AI is still not capable of original thought. It’s only capable of providing you with things it knows already exist.

It might be original to you. Let me be very clear: it might be something outside your perspective or something you don’t have the resources to see, but it’s still groupthink at the end of the day.

That is the opposite of thought leadership. When we were talking about this, I was saying it sounds like thought followership.

That is 90% of TikTok and Instagram out there. They find a tip somewhere and then reiterate it in their own way. They might be doing it in a creative and fun way, and they’ve got their followers, but they aren’t the thought leaders. They didn’t originate those ideas. Thought leaders are very rare because they require a different mode of thinking, one that cannot happen in AI. Someone has to lead that thought.

Now, the thought leaders are going to become the future of value for that AI to tap into to find out what’s new, what’s happening, and what the thoughts are on this process, and then start to group those thought leaders together and say, “Is this a good path? Does this make sense? Is this going to happen?” We’re going to see a lot of that occurring.

Your thought leadership, and the fact that you’re putting it out in podcasts and videocasts consistently and constantly, is the way to play with AI in a great way. Make sure that you are a part of that conversation in the future. The power of that original voice is going to matter. The other side of this is an AI-generated world where I don’t know whether a bot or a human generated something. While there are deepfakes of videos and deepfakes of voices, almost all good technology can detect them.

We can see when it’s a fake. We can see it in the audio pattern, and you can see when video was faked. There are ways for even AI to tell if something is fake or not, and ways for Google to tell if something is fake or not. When you’re showing up on camera and with your voice, and it is truly your voice and your face, that in and of itself is going to be rewarded more than using AI-generated material. That means there’s going to be a downgrade for copy, blogs, and other things that don’t come from voice generation.

We see a lot of AI tools being advertised and talked about that will generate a blog post for you in no time. Why write it the hard way when this tool can generate a better blog for you very quickly? It certainly can. There are lots of them out there doing that, but they’re only able to create these things from data they know already exists. How original can that be? There’s a danger there, with all sorts of potential issues around copyright and, more importantly, around being seen by the internet as not original and not a leader. I agree with you, Tracy. I think this is actually very exciting and good news for podcasters.

Right now, we’ve been seeing a lot of reports, and a lot of regurgitations of those reports, saying that there’s a poor economic outlook. Tom and I did an episode about our view of that. We think they’re wrong. We think they are only looking at the ad-based part of podcasting, which is less than 2% of the market. They’re not looking at the ecosystem altogether.

All of you, especially our followers, the ones who are using podcasting for business and thought leadership: that kind of nurturing of community cannot happen through a machine. Those things happen between humans. That in and of itself is why I think the economic outlook for podcasting and videocasting in an AI-driven world is completely underestimated, for long-form, not short-form. Let me be clear on that one. I think we’re being sorely underestimated there.

Tom and I have had websites with blogs for a long time, over a decade. Every time there’s what they call a Google slap, where Google changes its algorithm, our website got more traffic, and we kept getting that. We were astounding some of the search engine optimization experts, who were like, “You’re not doing any of these backlink purchases or any of these things. How is it that you are outdoing every single one of these algorithm shifts?” It’s because every single time the algorithm shifts, everywhere, they reward the original more and more.

That isn’t going to change. I think it’s only going to happen faster in this AI-driven world when they find someone with original thoughts, voices, and ideas, and who makes those connections in our heads that only humans are great at. We take all these pieces of information, synthesize them, and come up with an original view of the world on what you should do in your business and how you can help people. This is how you sell your books and courses.

This is how you do all of what you do. Our lovely business podcasters, who I adore out there and who are bringing your messages and your views on the world, keep doing it. All these other things are tools. They might make the technical parts easier for you, but do not lose your original voice in this process, no matter what. That matters. That curation, viewpoint, and perspective cannot be done by a machine. It will never be able to be done by a machine in the way that you can do it.

To add a little bit to that, we’re not negative or down on AI or ChatGPT at all. We’re just saying, “Podcasters, you have something special with the message you’re bringing to the world that is going to be recognized and valued.” We’re bullish on this AI and the opportunities to save time on certain things and allow more creative energy to be put into others.

That’s my excitement about the AI market or artificial intelligence. It’s to take away the mundane stuff that I don’t want to do and be right about it so I don’t have to overthink it either. That’s amazing. I want that because it gives me more time to reflect and come up with better answers for you, to research things and think about where the future is going, and to advise you in a better way. I have more time for all of you when I don’t have to do those things.

That’s the beauty of that future world. That’s what I’m excited about. How can I harness AI to make everyone’s lives better? It’s with my view and perspective on how to truly harness innovation to make everything that you do shine. That’s my job here and what I will be concentrating on for the next year and a half on our use of AI within the company. We are implementing it. We already use it in various ways. How will we continue to use it more? How we use it and how we train it and what we do with it is only going to make sure that your original voice and your uniqueness shine through in that process.

The big danger of AI is that everything sounds the same. That’s what I’ve seen in every other AI I’ve used. The more you use it, the more you realize they’re giving you the exact same answer every single time, or the exact same results. The headline is exactly the same with a couple of words shifted. The format for the descriptions is exactly the same every single time. We can’t have that. That destroys the originality that you bring to the world. We don’t want that here. We want your original voice to shine through on top of the tools.

This has been a much longer episode than our typical one, but I think it’s a topic that required it. I certainly hope everyone has gotten some wonderful stuff out of it. I happen to think this is probably going to be one of our most replayed episodes, with people going back to say, “What did she say about that? What was the history part of ChatGPT?” That’s great. Please do that. We will take some deeper dives into niche aspects of this in the future as we have more to share. If you haven’t subscribed already, please do, and stay tuned for future episodes.

We’ll be bringing you more.

Thanks so much, everybody. We’ll be back next time with another great episode. Until then.


Tracy Hazzard and Tom Hazzard

As podcasting and monetization marketing experts, husband-and-wife team Tom Hazzard and Tracy Hazzard help major publications, sports stars, and entrepreneurial influencers broadcast their original messages. A highly successful inventor and product designer, Tom has been rethinking brand innovation to build in authority and high-converting revenue streams. Tracy brings an insider media and promotion perspective as a former columnist for Inc. Magazine, a contributor to BuzzFeed, and an international speaker. Together, they are the blog writers and podcast co-hosts for Feed Your Brand and The Binge Factor. They provide businesses of all sizes with actionable tactics and strategies to spread marketing messages, grow valuable audiences, and retain valuable platform authority without a lot of time, cost, or effort.