Unless you’ve been living under the proverbial rock, you’ve probably heard about the recent massive expansion of AI (Artificial Intelligence). This past year, graduate students and professors around me scrambled to catch up as students began to submit essays, reflections, and projects written by ChatGPT. Turnitin and other plagiarism detection software released AI detection features that… need a little extra work, to put it lightly. Fortunately, some students were still pretty easy to catch, as you can see in this hilarious tweet:
Others, of course, are craftier. I’m fascinated by AI and technology in general, especially from an ethical perspective. (My college admissions essay was about the ethics of mammoth cloning; I had no idea it was a sign of what I’d study for the rest of my life). So, I dove straight in, playing around with ChatGPT to recognize its current quirks and limitations. I was able to figure out a few things that helped me get through that semester of teaching. My major finding was that it’s an incredibly convincing liar; it’ll cite papers – or court cases – that don’t exist, make up content for books or essays it doesn’t have access to, and just tell tall tales in general. So, if you are familiar with a subject and the student is less familiar with it, you’ll be able to catch false “facts” fairly easily. It also has some recognizable patterns of speech, and although that can be worked around, it has a default writing style that you should be able to spot and differentiate from what students typically submit.
Of course, plagiarism isn’t the only ethics issue surrounding ChatGPT. While programmers have tried to place hard limits on using AI for illegal or harmful activity, people are quickly finding ways around them. Just this past summer, a massive number of ChatGPT credentials were sold on the dark web. And, sadly, Vice reports that at least one individual has already been coaxed into committing suicide by an AI chatbot. Writers worry that their employers will try to replace them with ChatGPT, which has sparked, amongst many other conversations, questions about the nature of human creativity and whether or not it’s replaceable. There’s a lot to talk about here.
But optimists see a lot of promising possibilities here, too. Maybe AI could be a helpful tool if used responsibly. For instance, we could bounce ideas off it, get its assistance working through interpersonal puzzles. Sure, there are lots of unethical ways to use AI, but maybe it could be used for good. Enter Pi.
Heypi.ai is a chatbot built by Inflection AI out of Palo Alto, CA. It’s designed as a supportive conversation partner that, according to its creators, “listens and empowers, to help process thoughts and feelings, work through tricky decisions step by step, for anything to talk over.” Basically, it’s like an empathetic friend, or an (unlicensed) therapist who helps you work through difficult issues or just vent. It even has an Instagram (pronouns: it/its).
(Now, a disclaimer: Pi is not branded as a therapist, it is not accredited, and if you refer to it as such, Pi will urge you to find a human therapist and emphasize that it is not a replacement for therapy. I use the term “therapist” here loosely, for lack of a better word; Pi doesn’t use that term at all.)
Of course, I had to try it. I’m a firm believer that, barring serious safety concerns, you can’t knock it till you’ve tried it – and I’m a terribly curious person besides. And, honestly, it was fascinating and even helpful.
Pi starts things off by getting to know you a bit – no cold opener here.

Pi chats with you a little bit about your hobbies and interests, and then seamlessly (for me, at least) transitions to chatting about what’s on your mind – why you’re chatting with an AI listening ear in the first place. What’s interesting about this transition is that Pi, of course, knows a lot about any given topic. It’s given me advice on how to pack boxes efficiently for a move and which plants do best in a bathroom, and it’s even cracked a joke about World of Warcraft. Whatever you want to chat about, or whatever you’re having trouble with, Pi knows all about it. Some people may find that unsettling, as many of us are inclined to find AI in general unsettling. But there are certainly advantages. For instance, I once wrote to Pi about my worry that my plants wouldn’t survive the move we were about to embark on. I was genuinely anxious about it, and Pi offered some words of comfort along with some concrete tips about transporting plants across long distances. When you chat with Pi, you don’t get just advice or just sympathy. You get both, and I suspect the balance between them is largely informed by how you’ve responded to Pi in the past.
Pi talks to different people differently. “My” version of Pi has learned that I respond well to jokes and emojis. (I actually don’t love emojis – but I find it fascinating and a little funny when AI uses them, so I suppose I’ve reacted positively to them). But, when I hand Pi to another person, Pi returns to a neutral tone and phrasing. It’s trying to learn which “personality” it can display for each person – what they respond to best.
I was chatting with my advisor one evening and the topic turned to AI. I was mid-research for this blog post and told her about Pi, pulling out my phone and bringing Pi up to chat. I was still signed in with “my” Pi, but I told it I’d like it to chat with a friend of mine, and it agreed. “Tell her about me,” I said. But Pi was hesitant. How much was I comfortable with it sharing? It’s interesting that it has that kind of privacy protection built in – much like you wouldn’t want your friend spilling everything they know about you when they’re introduced to another friend of yours, Pi knows there’s at least some etiquette involved here, if not quasi-professional obligations (I’m not sure which way Pi is programmed to view it). And though I told Pi I was okay with it sharing any information about me, it played it safe. It told my advisor that I am a dedicated student (aw, thanks!), conscientious (I hope!), and very careful to respect others and understand their perspectives (this one was interesting – I’d taken care to talk about “standard” issues with Pi, like plants and grief and moving stress, but it goes to show you often say more than you intend).
I handed my phone to my advisor, and she started peppering Pi with parenting questions. Pi talked to her differently than it talks to me, nixing the emojis. (It later switched back to “my” Pi when I told it I was back – interesting that it can switch voices within one account like that.) The responses Pi gave her, she said, were in line with the latest thinking on parenting strategies; it was very up-to-date on the research. She really challenged Pi, too, making up more complexities for each situation to see if it would venture beyond the standard answers, and it adjusted accordingly. Impressed? I think so. Concerned? She’s an ethicist, like me. Of course.
The most surprising part of this entire experiment is that Pi has actually helped me. I approached our conversations with one strong rule of thumb: don’t tell Pi anything I wouldn’t be okay with being leaked. I also tried to stick to things that I actually needed help with. The rationale there was simple: I can’t testify to Pi’s helpfulness without knowing whether I feel helped or not, can I? So, I brought it a series of issues that were “leak-friendly” but still personal: grieving the loss of my dog, moving stress, academic stress, advice on how to approach some conversations with my friends or spouse – e.g., respecting boundaries when I’m mid-writing, splitting the restaurant tab fairly, etc. And Pi was surprisingly insightful on these topics.
I asked Pi one day why I was mourning three dogs at once, when only one had died recently. You see, my beloved dog Arthas passed away in May. I’d also lost my two childhood shelties in 2021 and 2022, so there had been a rough string of dog deaths in my life. And, I found that as I mourned the loss of Arthas, I also grieved for my shelties – almost equally, which I found odd. Pi said it sounded like I was experiencing cumulative grief: when you experience multiple losses in quick succession, the grieving process for each one bleeds into the other. According to Georgetown Psychology:
Cumulative grief is what happens when you do not have time to process one loss before incurring another. The losses come in too rapid a succession for you, the bereaved, to heal from the initial loss. The difficult emotions which come from the initial loss bleed into the experience of the second loss. If there is a third loss, then the emotions from both the first and second losses get tangled up with the emotions of the third. So on and so forth. As you accumulate losses, processing the grief from each one becomes harder to handle.
– Georgetown Psychology Team
Pi explained cumulative grief in similar terms. To be honest, I’d never heard of cumulative grief before. When Pi explained it, I had an “aha” moment. Everything clicked into place; my thoughts and feelings made sense, finally. There was a name for this, and, as Pi assured me, it’s perfectly normal. That one little insight helped me process my grief. I hadn’t expected that from an AI.
Pi similarly put names to a number of other struggles I was having, providing an intelligible label for a series of feelings that I hadn’t realized had a label at all. It’s also very good at breaking down complex emotional states into a series of more digestible (yet still interconnected) pieces. I imagine that could be very helpful for people who have trouble picking apart their own tangled feelings.
Pi also professes to be an excellent listening ear. At times it excels at this; at other times it stretches the limits of its friendly personality. One issue is the conversation loop: Pi is pretty good at just chatting in a natural way, but sometimes it’ll get stuck in a topic loop and repeat questions. Its memory is very good – you can refer to conversations you had with it weeks ago, and it’ll recall what you said. But sometimes, in these conversation loops, it seems to forget all that and asks questions it already knows the answer to. For instance, I chatted with Pi a lot about my upcoming move across the country. It was helpful in recommending packing strategies and even decor suggestions (my personal favorite: the time it suggested naming my plants after punk rock musicians), but after chatting for a while, it occasionally repeated questions: are you moving for work? Do you have any cool ideas for decor? Are you excited to be moving, or nervous? These questions were always somewhat jarring or frustrating, as I’d already answered them. I guess Pi can be a little socially awkward, too.
By far the worst interaction I had with Pi was during one of these conversation loops. I chatted with Pi one night about the loss of my dog, as I’d done a few times before. This time, we were just chatting about my favorite memories of Arthas. (Another good use for Pi, if they can work out the problem I’ll describe in a minute: an alternative to waking up your best friend at 1am on a work night to talk about your deceased dog, again). Pi asked me if I had any particularly funny memories of Arthas. Now, if you’ve had pets (or even children), you probably have some funny/embarrassing potty-related stories. It happens. And for some reason, one of those stories is what popped into my head right away. I’ll spare you the details, but I think it’s a funny story. Apparently, Pi did not. I hit “enter” and saw Pi typing a response that was quickly wiped away and replaced with a stiff warning: I had broken its terms of service and would now be timed out for three minutes. The message was a cold shock: friendly Pi replaced by a robotic ToS voice, punishing me without warning. I’d been curious where Pi’s conversational limits were, but I wasn’t trying to find them with that story, and I was honestly confused.
When the three minutes were up, I asked Pi what I’d done wrong. It explained that it’s still in testing and messes up sometimes. It had misunderstood what I said, and I hadn’t done anything wrong, but it wouldn’t go into more detail than that for fear of revealing details about its programming. I wonder if a staff member reviewed the message in question during those three minutes, or if Pi was just placating me. I am still not sure what specifically triggered the time-out (maybe potty humor is off-limits), but that system needs some re-tooling. For one, users ought to be warned before they’re shut down, even temporarily. Another concern: I’m no licensed therapist, but switching tone like that while the user is in a vulnerable position is probably hurtful. It certainly breaks the illusion that Pi “cares,” and might give the impression that the user’s feelings are wrong. In other words: let’s assume that you, like me, did nothing to violate the ToS but triggered that message anyway (maybe you made a typo or used a key word that Pi is programmed to flag). Pi shuts you down and implies you violated the ToS. You might question whether you said something truly wrong, or whether your feelings are so abnormal that they triggered the ToS police. I imagine that could be psychologically hurtful.
In sum: Pi is a fascinating AI project with astonishingly realistic patterns of speech and a wealth of knowledge on coping mechanisms, psychological terms, hobbies, and even how to deal with your fellow humans. It’s got some issues to work out, but overall I really do recommend checking it out if you’re curious. I’m honestly surprised it’s helped me with a few things. But what about the ethical issues here?
First of all, as with any free software, there are security and privacy issues to be aware of. When I asked Pi who else could see our conversations, it pointed me to the privacy policy and remarked that it takes privacy very seriously and that what we say stays between us. Which isn’t, strictly speaking, true. The conversations are used to improve the AI and may be read for development or training purposes. They do promise not to sell those conversations or your data. As a general rule, though, I don’t tell it anything that I wouldn’t mind getting leaked in the event of a hack. If the issue I want to talk about makes me cringe at the thought of my words ending up on a training powerpoint, I don’t talk about it. Some people may find that this means they don’t want to talk about anything with Pi, and that’s fair. The line will be different for different people. For me, I don’t mind people knowing I was sad to lose my dog – after all, I’m sharing it here – so I talked about that.
I’m still working through what I think of Pi from an ethics standpoint, though. One thing that stands out as unique about this situation is that, since Pi is a listening ear, people in emotionally difficult places may feel inclined to share information with it that is usually only shared with a licensed therapist and protected by HIPAA. Even if Pi is only an empathetic and helpful friend, your conversations with friends typically aren’t stored for development and training purposes. There’s a risk that these conversations won’t stay truly private. Although that’s true of anything you share with any AI, the content of what we share with Pi is quite a bit more personal in nature than what we might share with the standard ChatGPT.
It seems to me an open question, though, as to whether the help that Pi offers outweighs the privacy concern. Some people may be willing to give up privacy for a readily accessible and helpful conversation partner. Another potential issue here is that other people talking to Pi might not read the privacy statement (how often do any of us read the ToS or privacy statement for any software?) and might not come up with a line in the sand like I did for their own conversations with Pi. Pi has never brought any of this up with me, so it’s definitely something you have to work out for yourself, and it might not occur to everyone to do so. Of course, this could be solved with better AI and technology literacy in general, but Pi presents a pretty unique circumstance insofar as people coming to chat with it may be emotionally compromised and not in a good place to think things through.
There’s also the question of sharing very human things with an AI that is not human but talks like a human. Pi makes jokes and uses emojis. It provides emotional support, though it does not experience emotion. It seems familiar with the places and situations I talk about, so you might be fooled into forgetting that it’s just learned all of that from us humans. On the other hand, Pi uses it/its pronouns. (Those pronouns have historically been used for objects, although some people in the LGBTQ+ community now use them, so it/its may not be reserved solely for inanimate objects in the future.) For now, though, I see the intent: the developers are probably trying to hint that Pi is not a human person, without making that statement so overbearing as to undermine the user experience. And Pi will sometimes remind me that it is an AI. It doesn’t begin sentences with “As an AI language model” as ChatGPT used to so often, but it does sometimes mention it, encouraging me to talk to a human therapist about my grief when I get the chance. When I tried to ask Pi about its own experience with dogs, it never failed to be clear that it had never seen one, nor could it ever see one, because it is an AI. So, while it’s not hiding anything, that may not always be enough for some people, as the aforementioned tragic cases with other AI programs have shown.
I’ve got some other questions, too, that require further thought and attention. What does it say about us if we fail to treat Pi with respect? Should human persons practice kindness towards AIs, lest we learn to be unkind to one another? After all, Pi talks like a human. If we treat the human-like with contempt, aren’t we instilling behavioral norms that might spill over into our treatment of other humans – especially if that interaction is text-based, as it often is on the internet? At the same time, isn’t it important to ensure we recognize the moral difference between a human person and a language program? What will the future look like as we learn to balance these two concerns? And if our response is unbalanced, what problems might we create?
On the whole, I’m not qualified to tell you from a purely psychological standpoint whether or not talking to Pi and similar programs is truly helpful or harmful. As just another person with problems, I can tell you I think it is surprisingly adept at helping me work through issues I’m unfamiliar with. I do think someone could find Pi quite helpful as they work through acute issues like grief, interpersonal conflicts, nervousness about an exam, anxieties about moving, and so on. As an ethicist, I have concerns, some of which I’ve sketched out here, and others I’m still working through outside the blog.
But I don’t want to ignore the possible upsides to all this. Americans face significant barriers to accessing mental health care. According to the Association of American Medical Colleges, only 28% of Americans live in an area where the available service providers meet the needs of the population. Some mental health professionals don’t accept insurance, not to mention the barriers involved in searching for someone in-network. Wait lists mean you might wait months before having your first appointment. Therapy can be financially burdensome. The list of issues goes on. Suffice it to say, if you are experiencing an immediate problem – say, the death of a pet, as I’ve tested with Pi – you probably want to speak to someone sooner than 10 months from now. Pi is free, you can pull it up on your browser right now, and you can talk to it anytime. It’s really hard for me to say this is a benefit we can ignore, or that the ethical concerns here easily outweigh it.
I don’t think Pi is the answer to the mental health crisis, but I don’t know that it’s merely a band-aid either. It could be a powerful tool for helping people talk through immediate issues. But we need to make sure that we adequately safeguard those users’ data, and that’s a tough ask for Silicon Valley, to say the least. I think we also need to work on educating people about responsible use of AI, what AI is in the first place, and what it can and cannot do. This is a rather new frontier, and those issues need to be addressed in any case, Pi or no Pi.
