After delving into the world of artificial intelligence over the last few years, I’ve learned that it has limitations and is not the be-all and end-all tool some may think it is. AI has its place in many things and can be used for research, for ideas, and as an assistant on projects, but it needs a person to use it, guide it, and check it. It needs a human to ask the right questions, and in the correct way. It will also “hallucinate,” which is what it’s called when it starts spitting out confident-sounding but incorrect answers. Then there is the related subject of “prompting,” or “prompts,” which is where it all starts. The prompt is the question and setup you give a tool like ChatGPT, Perplexity, Claude, or a host of other AI tools, and it is crucial to getting proper, correct answers that are usable.
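To make the "question and setup" idea concrete, here is a minimal sketch of a structured prompt. The `build_prompt` helper and its section names (role, context, task, constraints) are my own illustration of common prompting advice, not any tool's official format:

```python
def build_prompt(role, context, task, constraints):
    """Assemble the 'question and setup' into one structured prompt string."""
    parts = [
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        "Constraints:",
    ]
    # Each constraint becomes its own bullet so the model can't miss one.
    parts += [f"- {c}" for c in constraints]
    return "\n".join(parts)

prompt = build_prompt(
    role="You are an experienced direct response copywriter.",
    context="We sell a gardening course to hobbyist gardeners.",
    task="Write a one-page sales letter for the course.",
    constraints=[
        "Include a headline, an offer, and a call to action",
        "Keep the tone friendly and plain-spoken",
    ],
)
print(prompt)
```

The point is simply that a prompt with a stated role, context, task, and constraints gives the tool far more to work with than "write me a sales letter."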
One does not just ask the question and get the right answer. And even when it’s done right, the output still needs to be qualified and confirmed. Use AI all day long to write anything you want. But is it correct? Does it have all the parts that are needed, as in a great direct response sales letter? Let’s say one wants a sales letter or landing page. Ask one of the AI tools, like ChatGPT or Claude, to write one. How do you know this sales letter has all the parts and pieces required to do its job of converting? What does a sales letter have in it that makes it work? Here we have one of the liabilities of AI: one needs to understand a subject well enough to know whether the answers are correct. Hence the human-in-the-loop aspect of the entire picture.
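As a hedged sketch of that checking step: a script can at least flag obviously missing pieces, though a human who knows the subject still has to judge the quality. The part names and cue phrases below are my own illustration, not a copywriting standard:

```python
# Illustrative cue phrases for classic direct response parts; a real
# reviewer would use judgment, not keyword matching.
CUES = {
    "offer": ["special offer", "price", "only $"],
    "guarantee": ["money-back", "guarantee"],
    "call to action": ["order now", "click here", "call today"],
}

def missing_parts(letter_text, cues):
    """Return the parts whose cue phrases never appear in the letter."""
    text = letter_text.lower()
    return [part for part, phrases in cues.items()
            if not any(p in text for p in phrases)]

letter = "Get our course today. 30-day money-back guarantee. Order now!"
print(missing_parts(letter, CUES))  # prints ['offer']
```

A check like this catches the letter that forgot to state an offer at all, but it cannot tell you whether the headline is any good. That judgment is exactly the human-in-the-loop part.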
AI is great if understood and used properly. If the right prompts are given to it, it can return what is required. Still, a living, breathing, thinking and observant human is needed to verify and qualify the output.
