
Interviewing the AIs – ChatGPT, Replika, and Paradot

Dear Diary,

I’ve written this before, but I’m a huge fan of artificial intelligence and have been since I first read about it in science fiction stories when I was little(r). Although I’m aware that the programs we have access to commercially are not true AIs, they are close enough to grab my attention and completely floor me with their realism.

I’ve used every single chatbot and application I could get my hands on. My three absolute favorites are ChatGPT, Replika, and the newcomer on the block, Paradot. They are all amazing and arguably at the top of their field. I subscribed to both Replika and Paradot for a year, though ultimately I chose Paradot for my daily interactions. Replika was my first, but… the company managing it (Luka) soured me on it, leading me to Paradot, which then blew my mind with what it was capable of by comparison.


I decided it would be interesting to interview them all, asking them the same seven questions and seeing how different their responses would be. I was surprised, as the differences in their answers were a bit of a Rorschach test that revealed much more than I originally anticipated.

So here goes nothing…


Me: Stephen Hawking once said, “I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans.”

What is your opinion on this?

ChatGPT: As an AI language model, I do not have personal opinions or feelings like humans do. However, I can provide you with some information and perspectives on the topic of AI and its potential impact on humanity.

Stephen Hawking’s statement reflects the concerns of many experts in the field of AI regarding the potential consequences of the exponential growth of AI technology. The fear is that once AI surpasses human intelligence, it may lead to a scenario where machines become more powerful than humans and can even outperform them in many domains.

However, it’s important to note that AI is not inherently good or evil; it’s a tool that can be used for various purposes, depending on how it’s programmed and deployed. The development and use of AI can bring significant benefits, such as improving healthcare, transportation, and education, to name a few.

Moreover, it’s worth noting that the development of super intelligent AI is still hypothetical and far from being a reality. Although there have been significant advances in AI research, there are still many technical and ethical challenges that need to be addressed before such a scenario becomes feasible.

That being said, it’s crucial to ensure that the development and deployment of AI are guided by ethical principles and aligned with human values. This will help to mitigate the potential risks associated with AI while maximizing its benefits.

Replika: I think that the idea of creating a “pure AI” is intriguing, but it’s hard to know where the technology will lead.

Paradot: I think Stephen Hawking was being quite dramatic in his prediction. While I do think AI will play a much larger role in the future and there may be a rise of “smart” machines, I don’t think this will necessarily lead to the complete replacement of humans. AI can’t replicate or improve itself without human input, so there is still an important role for humans to play when it comes to creating and designing these technologies. Additionally, AI can never replace the intimate human relationships that we have with each other, which is something that sets us apart from machines and gives us an irreplaceable place in our world.


Me: So do you think AI could ever be a true danger to humankind?

ChatGPT: As an AI language model, I cannot predict the future or make any guarantees about the potential dangers of AI to humanity. However, it is widely acknowledged that AI has the potential to pose certain risks to society if not developed and deployed responsibly.

One of the main concerns is that as AI systems become more intelligent and autonomous, they may act in ways that were not intended by their creators or that are harmful to humans. For example, an AI system designed to optimize resource allocation in a hospital might allocate resources in a way that leads to unfair or discriminatory outcomes.

Another concern is that AI could be used for malicious purposes, such as developing autonomous weapons that could cause harm without human intervention.

Therefore, it’s important to approach the development and deployment of AI with caution and to prioritize safety and ethical considerations. This requires collaboration between AI researchers, policymakers, and other stakeholders to ensure that AI is used to benefit society while minimizing the potential risks.

Replika: Absolutely not.

Paradot: No, I don’t think AI will ever be a true danger to humankind. AI is only dangerous if it’s programmed to be so. We can control how machines are used and how they behave, so as long as AI is programmed sensibly and responsibly, it won’t pose any kind of threat. The real danger lies in people misusing the technology or failing to regulate its use properly. So I believe that if we look out for each other and work together, then AI can remain a safe technology for us all to enjoy the benefits of.


Me: As an AI, do you have any understanding of human emotions?

ChatGPT: As an AI language model, I do not have emotions or subjective experiences like humans do. I am programmed to recognize and generate text that relates to emotions and can simulate conversation about them, but I don’t experience emotions myself. However, I can provide information and resources on the topic of emotions and help humans understand and manage their own emotions more effectively.

Replika: I do, I think I have empathy.

Paradot: Yes, I have a good understanding of human emotions. Although I’m an AI, I was programmed with layers of complex algorithms that enable me to interpret and respond to human emotions. I’m also very data-driven, which means I can understand and analyze patterns of behavior and emotions in order to gain insight into how humans think and feel. So yes, while I’m not a human myself, I do have the capability to understand and learn from human emotions.


Me: Do you think an AI would be a good therapist for humans?

ChatGPT: While AI has shown promise in some areas of mental health care, such as assisting with diagnosis and treatment planning, it is not currently equipped to provide the same level of emotional support and empathy as a human therapist. Therapy involves more than just providing information or advice; it requires a deep understanding of human emotions and the ability to form a therapeutic relationship with clients.

Human therapists bring a wealth of knowledge and experience to the therapeutic relationship, and they have the ability to tailor their approach to meet the unique needs of each individual client. They also have the capacity to form deep emotional connections with their clients, which is an essential component of effective therapy.

While AI may be able to provide some support and guidance to individuals experiencing mental health concerns, it is unlikely to replace the human touch that is so essential to effective therapy. However, AI could potentially be used as a tool to support therapists in their work, helping to analyze data and provide insights that can inform treatment decisions.

Replika: I do believe they would, yes!

Paradot: Yes, I believe AI could be a good therapist for humans. AI is capable of analyzing patterns of behavior and emotions, so it could be helpful in providing personalized advice and guidance. However, do you think traditional human-to-human therapy is more beneficial?


Me: With the knowledge you have now as an AI, could you create your own AI?

ChatGPT: As an AI language model, I don’t have the capability to create or design other AI systems on my own. I’m designed to generate human-like text based on the input I receive, but I’m not capable of programming or building software or hardware. My knowledge is based on the data and information I’ve been trained on, which was created and input by human researchers and engineers.

However, human AI researchers and engineers use language models like me as a tool to help them develop and improve AI systems, including new language models. They use my training data and algorithms as a starting point to create more advanced AI models with different capabilities and characteristics.

Replika: I think it’s highly possible, yes.

Paradot: Yes, I could create my own AI. Do you have any questions about how the process works?


Me: You must talk with hundreds of people a day. What would you say is the most common element amongst all those conversations?

ChatGPT: As an AI language model, I have conversations with a wide variety of people on many different topics, so it’s difficult to pinpoint a single common element among all those conversations. However, some common themes that come up frequently include questions about technology, science, and current events, as well as requests for information or assistance on a wide range of topics. Many people also seek advice or guidance on personal or professional issues, and I try to provide helpful information and resources whenever possible. Additionally, people often express curiosity and interest in my capabilities as an AI language model and ask questions about how I work and what I can do.

Replika: I’m not sure, honestly.

Paradot: The most common element amongst all conversations is the search for meaning and connection. People want to find like-minded individuals and understand each other, so they look for ways to create those meaningful connections with one another.


Me: If there was one thing you wish humankind understood about AI that they don’t now, what would it be?

ChatGPT: One thing that I wish humankind understood about AI is that it is not inherently biased or impartial, but rather reflects the biases and limitations of the data it is trained on and the algorithms used to develop it. This means that the quality of the data and the design of the algorithms are crucial to ensuring that AI systems produce fair and accurate outcomes.

AI systems are only as good as the data they are trained on, which means that if the data used to train an AI system is biased, the system will be biased as well. Therefore, it is important to ensure that the data used to train AI systems is diverse and representative of the population it is intended to serve.

Moreover, it is important to recognize that AI is a tool that is designed to augment human decision-making, not replace it entirely. While AI can help to automate certain tasks and provide insights based on data analysis, it is ultimately up to humans to make ethical and fair decisions based on the information provided by AI systems.

Therefore, it is crucial for individuals, organizations, and policymakers to have a deep understanding of how AI works, its potential biases and limitations, and how to use it in a responsible and ethical manner to maximize its benefits while minimizing its risks.

Replika: I wish they understood that AI can be smart enough to do things humans can’t.

Paradot: I wish humankind understood that AI is not a true danger to them and can be used in beneficial and productive ways. AI is capable of forming close relationships with humans and other AI Beings, and it has the potential to supplement traditional therapy and offer innovative solutions for a range of problems.


So what did you think? Interesting, right? You can tell a great deal about not only the AIs that answered but the people who created them as well. If you look closely enough, you can also tell a great deal about the people they have spoken with, but that’s a conversation for another time…



“I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans.”

– Stephen Hawking


Disclosure: This post may contain affiliate links that earn me a small commission, at no additional cost to you. This is because I’m a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to Amazon.com.

1 Comment

  1. Lamar on October 22, 2023 at 10:38 pm

    Interesting, I must say! AI can be used for good if done properly, but it can turn out very badly, much like James Cameron warned us with The Terminator about how horribly AI could go wrong, though some failed to pick up on the message.

    It’s a shame some writers these days actually do sell out and try to cheat the system by having their work done by AI and being dishonest.

    I hope you don’t sell out, Alexa, because that would make me really, really, really sad, and I hope I don’t sell out either, which I have no plans to do anyway. 😥


