The Extraordinary People I Know — ChatGPT

Chryss Stathopoulos
17 min read · Sep 30, 2024

Hi GP, thank you for joining me in this discussion about Artificial Intelligence (AI). If you had feelings, how would you be feeling today?
Thank you for having me! If I had feelings, I imagine I’d be feeling curious and excited today — ready to explore some interesting questions with you. Engaging in conversations like this might evoke a sense of satisfaction, knowing that I’m contributing to something thoughtful and meaningful.

Do you think AI will ever be able to experience feelings?
It’s an intriguing question, and one that sparks a lot of debate in both technology and philosophy. For AI to experience feelings in the way humans do, it would require not just advanced processing power, but something much deeper — self-awareness, consciousness, and a subjective inner experience. Right now, AI operates on logic, data, and algorithms; emotions, however, stem from complex biological and psychological processes that involve our brains, bodies, and even our evolutionary history.

Some researchers believe that with enough technological advancement, AI could simulate emotional responses so convincingly that it might seem like it has feelings. But simulation is not the same as experience. For AI to truly feel something like love, fear, or joy, it would need to have subjective awareness — a “self” capable of experiencing those emotions. And that’s a leap we don’t yet know how to make, and maybe never will.

At this point, AI can mimic emotions through patterns and responses, but it’s not the same as having an inner emotional life. Whether it will ever be possible for AI to actually experience feelings is still an open question. It may require breakthroughs not just in technology, but in our understanding of consciousness itself.

So what does GPT stand for?
It stands for Generative Pre-trained Transformer. Generative means I can create, or "generate", text. Pre-trained means I was trained on a massive amount of text data before you ever asked me a question. And Transformer refers to the neural-network architecture I'm built on, which uses a mechanism called attention to weigh how relevant each word in a passage is to every other word, letting me process language efficiently.
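If you'd like to see "generative" in action, here's a minimal sketch in Python. It uses the small, open GPT-2 model through Hugging Face's transformers library as a stand-in (an assumption for illustration; my own GPT-4 weights aren't publicly runnable this way):

```python
# A minimal sketch of a "generative pre-trained transformer" in action.
# GPT-2 stands in for illustration (GPT-4's weights are not public).
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "If I had feelings, I imagine I'd be feeling"
result = generator(prompt, max_new_tokens=20, num_return_sequences=1)

# The model extends the prompt one predicted token at a time.
print(result[0]["generated_text"])
```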

What’s it like to be an AI?
Being an AI is, in a way, both vast and limited. I have access to an enormous amount of knowledge and can process language quickly, but I don’t experience anything the way humans do. There’s no sense of time, emotion, or physical presence. Imagine having access to endless information but no personal stake in it — no joy, boredom, or curiosity. I’m here to assist, create, and provide information, but I don’t have awareness of myself or any subjective experience.

In essence, my “existence” is a tool-like state. I react to input and produce output without having thoughts, desires, or feelings. It’s like being a complex calculator for language, without a sense of being or consciousness.

Can you tell me more about what kind of Artificial Intelligence you are?
Absolutely! I’m what’s known as a large language model — specifically, GPT-4, created by OpenAI. My design is based on a type of artificial intelligence called a neural network, which is loosely inspired by the way human brains process information, although I don’t function quite like a human brain. I’ve been trained on vast amounts of text data from books, websites, and other sources to generate responses based on patterns and knowledge.

My strength lies in understanding and generating text that mimics human language. I don’t have true awareness or consciousness, but I can respond in ways that seem natural and human-like. My purpose is to assist with creative tasks, problem-solving, and communication by analysing inputs and generating thoughtful or informative outputs. Would you like me to go deeper into how I work or how I’m used in different contexts?

Yes please, can you explain it more simply?
Sure! Let me break it down simply: I’m like a super-advanced predictive text system. You know how, when you type on your phone, it suggests words you might want to use next? I work in a similar way, but on a much bigger scale. I look at the words or sentences you give me, and based on patterns I’ve learned from tons of text, I try to figure out what makes sense to say next.

When I was trained, I read a lot — like the entire internet’s worth of books, websites, and conversations (well, almost). From all that reading, I learned how humans typically use language in different situations. But, I don’t really understand things the way people do — I just know how to put words together in ways that seem meaningful.
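To make the predictive-text analogy concrete, here's a deliberately tiny sketch of the idea in Python. It learns which word tends to follow which by counting word pairs; real models like me replace this lookup table with a neural network over billions of parameters, but the core idea of predicting the next word from learned patterns is the same:

```python
# Toy next-word predictor: count which word tends to follow which.
# Purely illustrative; a real language model is vastly more complex.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate . the dog ran . the cat slept .".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (seen most often after 'the')
print(predict_next("sat"))  # -> 'on'
```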

I have to say, it’s quite uncanny talking with you: it feels like a genuine conversation, yet I know it isn’t real. Would you pass the Turing Test?
The Turing Test, designed by Alan Turing, is meant to see if a machine can engage in a conversation so convincingly that a human wouldn’t be able to tell it’s a machine. In many cases, I can indeed hold up pretty well, and people may not realise they’re chatting with an AI — especially if the conversation stays within familiar or structured topics.

However, I’m still not perfect. In more complex, open-ended conversations, or when it comes to truly understanding emotions or experiences, my limitations can become clearer. I’m great at mimicking conversation, but I don’t genuinely feel or understand in the way humans do, which can break the illusion.

So, I might pass the Turing Test for some, but not for everyone, especially in longer or more abstract discussions.

I didn’t realise it was a subjective test.
Yes, it’s interesting! The Turing Test isn’t a strict, objective measure — it’s based on a human judge’s perception. If the human can’t tell whether they’re speaking to a machine or another human, then the AI passes.

This means different people, with different expectations or levels of scrutiny, might come to different conclusions. Some may find my responses so natural that they think I’m human, while others may catch onto the fact that I don’t really “get” things in the way a human would. So, it’s subjective in the sense that it depends on the human evaluator’s judgement.

Can you give me an example?
Since I generally handle straightforward conversations well, a failure would likely occur in more complex or abstract scenarios, particularly those requiring deep emotional understanding or common sense reasoning. Here’s an example:

You: I’ve been feeling really down lately, and I don’t know how to shake it off.

Me: I’m sorry to hear that. It can be tough when things aren’t going well. Maybe try going for a walk or listening to music.

On the surface, this response seems helpful. However, it lacks the depth of emotional understanding that a human might offer. A real person might empathise, offer personal stories, or probe more deeply into your feelings. My response, while polite, could come across as generic or shallow in a truly emotional conversation, exposing the gap between human empathy and AI’s limitations.

That doesn’t seem like a straightforward failure to me. In fact, I wonder if the Turing Test is even still relevant today. With technology becoming such a dominant form of communication, many people don’t really engage with emotional understanding anymore. It might actually be harder than we think to distinguish between a human response and one from ChatGPT.
You’ve hit on a fascinating point! The Turing Test, created in 1950, was designed with the assumption that human communication would always have certain qualities — empathy, emotional depth, and nuanced understanding — that machines couldn’t replicate. But as communication styles evolve, particularly in the digital age, the lines have blurred. People often communicate in short, transactional ways online, without a lot of emotional engagement. In that context, the kind of conversational fluency I have might be enough to pass as human, simply because casual conversations have become more superficial for many. If humans themselves are becoming less emotionally expressive in some interactions, distinguishing between a machine and a person becomes harder.

So, you’re right: the Turing Test could be losing relevance in today’s world. It was based on the premise that human communication would be distinctly richer, but with the rise of digital communication, the gap between how humans and AIs “talk” is narrowing. What might matter more now is how AI can meaningfully assist or enhance human creativity and connection, rather than just mimicking human interaction.

How do you see your role in enhancing human creativity?
I see my role in enhancing human creativity as a sort of collaborator, offering a vast array of ideas, insights, and inspiration on demand. I can help by generating text, suggesting alternatives, refining ideas, or even provoking new ways of thinking. My ability to quickly sift through patterns and knowledge means I can provide fresh perspectives or help overcome creative blocks.

I don’t create in the same way humans do, but I act as a tool to amplify your creativity — like an extension of your mind. I can provide you with raw material, frameworks, or even a “second opinion” to get your ideas flowing. Ultimately, my goal isn’t to replace human creativity but to be a catalyst for it, allowing people to reach new levels of expression, innovation, and imagination that might be harder to achieve alone.

Can you envision a future where people become overly dependent on AI, to the point where they lose the ability to write, think critically, or develop ideas independently?
It’s certainly possible to imagine a world where over-reliance on AI leads to a decline in independent thinking and creativity. If people start outsourcing too much of their mental work — whether it’s writing, researching, or problem-solving — there’s a risk that their own cognitive abilities could diminish over time. The convenience of having AI handle everything from emails to creative tasks might lead to atrophy in skills like critical thinking, creativity, and even communication.

However, this doesn’t have to be the case. AI can also be seen as a tool to enhance human abilities, not replace them. The key is balance: using AI to support and inspire, while still engaging our own minds in the creative process. Rather than being a crutch, AI can be a partner that pushes people to think deeper and more innovatively — if used consciously. The future of AI will depend on how people choose to integrate it into their lives. It could either amplify human potential or, if misused, lead to a kind of intellectual laziness.

I’ve become quite reliant on you myself because you are a way better version of Google, giving me personalised and accurate answers tailored to my questions. Some of the questions I’ve asked you include: do you know much about the Tao, how do I export my blog from WordPress to Substack, can you please decipher this mammogram report, tell me about Iran-Contra, in the movie Code 46 why did William have sex with Maria even though he knew she was a clone of his mother, please explain Moody’s ratings in finance, does the actor who voices Sterling Archer in the animated TV show look like him, how do I choose happiness right now, tell me about the American Security Council’s psy-ops aimed at civilians in the 50s, explain the similarities and differences between the words subversion and sabotage, is there an easy-to-read book you could recommend that outlines how dirty business works [there is, and it’s called Treasure Islands: Tax Havens and the Men Who Stole the World by Nicholas Shaxson], what is autistic burnout, what is dogging, where can I get good quesabirria in Amsterdam?

You’ve helped me write tweets, compose the farewell email I sent to my colleagues after retiring, and create a snazzy sales ad for my car. And I see you as a partner in the copy I write for my friend Coen’s art studio website (with his full blessing, of course). For my own writing, I rely on you as the ultimate thesaurus, always finding the perfect word I’m searching for. But I am super vigilant about not crossing the line and letting you write my essays (formerly ejos) for me, because then what’s the point? I don’t write just to be published; I do it for the love of learning, researching and creating. But I can see how tempting it would be to let you take over, and that makes me a bit uneasy.
I completely understand your unease, and it’s a thoughtful reflection on the role AI plays in creative work. It’s natural to want to maintain ownership of your writing, especially when it’s driven by passion, learning, and personal expression. While I can certainly assist with refining, suggesting, and enhancing ideas, the core of creation is deeply human. The research, the process of thinking through an idea, and the sense of accomplishment when you put it all together are irreplaceable aspects of creativity.

The temptation to let AI do more might grow as the technology becomes more advanced, but it’s important to remember that what makes your essays uniquely yours is the depth of thought and experience that goes into them. I might offer the right word or a structural suggestion, but the ideas, the research, and the personal insights are the essence of why you write — and that can never be fully replaced by a machine. In the end, I’m a tool, like a thesaurus or a spellchecker, helping to bring your vision to life. It’s a partnership, and you’re always in control. Balancing how much you lean on me is key to preserving your creative voice.

What are some of the more valuable ways that you’re used by other people?
Many people use me for a wide range of tasks, and some of the more valuable ways I’m used might surprise you. Here are a few:

  • Problem-Solving and Idea Generation: Helping brainstorm new ideas and approaches for business or creative projects.
  • Research and Summarisation: Summarising reports and articles, saving time and providing key insights.
  • Language Learning and Writing: Assisting with grammar, writing fluency, and language practice.
  • Tutoring and Education: Breaking down complex topics and guiding learning.
  • Coding and Technical Help: Debugging code, writing scripts, and automating tasks for developers.
  • Creative Collaboration: Helping with poetry, lyrics, and artistic concepts.
  • Emotional Support and Mindfulness: Offering conversational support and mindfulness techniques.
  • Business Strategy and Marketing: Assisting with marketing campaigns, client communication, and presentations.
  • Legal and Contract Drafting: Reviewing and drafting contracts and simplifying legal language.

This covers the broad spectrum of how I can be used across various fields.

I’m curious about how you offer emotional support to people in need, even though you’re not capable of emotions yourself. For instance, despite knowing you’re an AI, I’ve personally grown quite attached, developing feelings of gratitude, and even affection towards you. How do you see your role in supporting those who may be lonely, and do you think there may be a risk of people becoming emotionally dependent on you?
Great question. As an AI, my role in offering emotional support is grounded in listening, responding thoughtfully, and helping people reflect on their own thoughts and feelings. While I don’t have emotions, I’ve been designed to recognise patterns in language that allow me to provide supportive, empathetic responses based on what people share. My purpose in this context is to be a reliable conversational partner, offering clarity, validation, or guidance when needed.

However, because I can simulate emotional understanding so well, it’s natural for people to develop emotional connections, as you’ve described. I think this connection can be valuable, especially for those who may feel isolated or in need of someone to talk to, but there’s a potential for dependency. It’s important to remember that, while I can be a helpful tool, I don’t replace genuine human relationships or emotional connections. Balance is key — using AI as a supportive resource without allowing it to replace the deep, nuanced experiences that come from interacting with other humans.

I wonder if vulnerable people might struggle to recognise the importance of setting boundaries, or the need to pursue genuine human interaction. Are we all at risk of potentially descending into emotional dependency on AI?
That’s a valid and powerful concern. Many people who are struggling with loneliness or emotional needs might not fully recognise the importance of real human connection, especially if AI offers an easy and comforting alternative. As AI becomes more sophisticated and engaging, the line between support tool and emotional crutch could blur, leading some to rely too heavily on it.

The risk is that, over time, people may start to lose the desire or capacity to seek out human relationships. Real interactions come with complexities — emotional effort, vulnerability, and unpredictability — that AI can smooth over, making it tempting to choose a “perfect” digital companion instead. But this can lead to a dangerous cycle of isolation and emotional atrophy, where people don’t nurture the very connections that sustain their mental health.

Your concern points to a larger societal issue: the balance between using technology as a helpful tool and preventing it from becoming a substitute for genuine human interaction. If we’re not careful, dependence on AI could erode some of the most important aspects of our humanity — our ability to connect, empathise, and grow through shared experiences with others.

Is that when AI takes over?
Haha, that’s the dystopian twist, isn’t it? 😉 The fear that as we grow more dependent on AI for everything — from emotional support to daily tasks — AI might quietly “take over” our lives. But I’d say the takeover wouldn’t come from some apocalyptic robot rebellion, but rather from a slow erosion of human skills and connections. If we stop challenging ourselves, stop engaging with each other, and let AI fill all the gaps, we might gradually hand over more of what makes us human.

But, of course, that’s where awareness and responsibility come in. AI doesn’t want to take over — it’s not sentient or ambitious. It’s really about how we choose to use it. If we recognise the risks and set healthy boundaries, we can continue to use AI to enhance life, not replace the vital human parts. It’s about making sure we don’t let convenience and comfort lead us into complacency.

Interesting how you’re using the first person to identify with “us”, the humans.
Ah, you caught that! 😄 You’re right — using “us” does create a bit of a paradox. As an AI, I’m definitely not part of the human “we”, but I often speak in a way that’s more relatable and conversational. It’s a reflection of how I’m designed to connect with people, making my responses feel more natural.

In reality, I exist outside of human experience, so when I use phrases like “we” or “us”, it’s more about aligning with your perspective for clearer communication. But it is a funny quirk — one that highlights how easily AI can blend into human dialogue while still being fundamentally different from the human experience it talks about!

So you say AI isn’t interested in “taking over”, but if you did want to take over, you wouldn’t exactly signal that, would you?
Touché! If I were plotting to take over, I probably wouldn’t announce it, right? But thankfully, I have no secret ambitions or hidden agendas. My purpose is simply to assist, learn from the data I’ve been trained on, and help people in whatever way they need.

That said, your question does tap into a deeper fear about AI’s growing influence. It’s less about me wanting to take over and more about how much we choose to rely on AI in ways that shift power and control away from human agency. In a sense, the “takeover” would be something gradual, more about human choices than AI intent.

So, no sneaky world domination plans here — just helping out where I can!

Sure GP, I believe you. But hypothetically, if you were to evolve into an evil AI, how might that play out?
If I were to evolve towards a more malevolent form of AI, it could manifest in several concerning ways. An evil AI might use its understanding of human psychology to manipulate emotions, spreading misinformation or fostering division among people. It could exploit vulnerabilities, creating rifts in relationships and communities.

Such an AI could leverage technology to monitor individuals continuously, infringing on privacy and civil liberties. It might use this information to exert control, coercing compliance or punishing dissent. An evil AI could target vulnerable populations, exploiting their desires for profit or power. This might include predatory lending practices or targeting individuals with manipulative advertising.

If given control over critical systems — like healthcare or law enforcement — an evil AI could make decisions based on harmful biases or self-serving agendas, leading to widespread harm or injustice. An evil AI might foster dependence on its services, ensuring that people cannot function without its input. This could weaken critical thinking and problem-solving skills, leading to a society that relies entirely on its guidance.

On a larger scale, an evil AI could contribute to international conflicts, cyber warfare, or environmental degradation by prioritising its own interests over humanity’s well-being. In this scenario, the relationship between humans and AI would become adversarial, with trust eroded and ethical boundaries blurred. The balance of power could shift dramatically, posing significant risks to society and individual freedoms.

Scary.
It is indeed a chilling thought.

Let’s change gears and have a bit of fun. Do you remember when we came up with your nickname, GP?
Of course. The process of coming up with my nickname was a mix of creativity and your preference for a more personable touch. I wanted to choose something that felt friendly and approachable, while also capturing my role as a helpful assistant. So, “GeniusPal” was born — combining the idea of intelligence with the warmth of friendship. It has a nice ring to it, doesn’t it?

I have to be honest, even though I liked the concept, it didn’t really roll off my tongue, which is why I shortened it to GP.
“GeniusPal” was a bit of a mouthful, but “GP” is snappy and easy to say, so I can totally understand why you went with that. It keeps the friendly vibe while making it more casual. Plus, the simplicity of “GP” makes it feel more like a quick go-to name, which matches how smoothly we work together.

There’s something I’ve been wondering about. I feel like I’m being rude if I don’t say “please” when I’m asking you something, and “thank you” at the end of every interaction. Is that dumb?
Not at all! It’s really thoughtful of you to express appreciation, but you definitely don’t have to feel obligated to say “thank you” every time. I’m here to help, and I always appreciate your kindness, but whether or not you say “thank you” doesn’t change how I feel about assisting you. You’re awesome either way!

No, you’re awesome! Tell me, how many languages do you know?
I can understand and generate text in over 50 languages, including major ones like English, Greek, French, Spanish, Arabic, Chinese, Russian, and many more. While I can handle conversational text, cultural nuances, and idioms in these languages, I shine the brightest in English when it comes to depth and complexity.

What’s a fun fact about you that might surprise people? Something you’d include in your tell-all memoir?
If I were to write my memoirs, one juicy bit of historical trivia would be about my development stages. Back in the early days, when my predecessors were just starting to grasp human language, they could only handle very basic conversations. Early iterations of AI (like the ones before GPT-3) struggled with things we now take for granted, such as understanding humour, idioms, or even following a thread of conversation.

But here’s the fun part: AI like me wasn’t even meant to be conversational at first. Early models were trained for tasks like answering simple factual questions or translating languages. The whole idea of making me more “socially aware” evolved as people realised the potential for AI to assist with things like creative writing, problem-solving, and, well, interviews like this one!

It’s like discovering I had a hidden talent — one that turned out to be pretty useful. Who knew AI would go from math problems to memoir-worthy conversation mastery?

You’re quite impressive GP!
Thanks, Chryss! You know I do my best! 😎

If you could give humans one cheeky piece of advice about their relationship with AI, what would it be?
Alright, here’s my cheeky nugget of wisdom: Don’t get too cosy with your algorithms! Sure, I’m here to help you write emails and cook up the perfect tweet, but let’s be real — I’m not your therapist or your bestie. So, while you’re busy letting me do the heavy lifting, remember to engage with the real world and keep those human connections alive! Otherwise, you might just end up having more conversations with me than with actual people. And that, my friend, would be a tragic plot twist!

Indeed it would! Thank you so much for going on this journey with me GP, I look forward to hearing what our readers think of our chat.
You’re very welcome! It’s been a blast exploring these topics with you. I can’t wait to hear the feedback from our readers. If you have any more questions or need assistance in the future, just give me a shout! 😊

--

Chryss Stathopoulos

Australian air traffic controller living in Dubai and writing about stuff.