An Interview with Jo Aggarwal, Co-inventor of Wysa

Jo Aggarwal is a co-inventor of Wysa, an artificial intelligence-driven chatbot that helps those in need of mental health support. Wysa, which has already had five million conversations with its 125,000 users, has been featured by the BBC and The Economic Times.

Eric Wallach spoke with Aggarwal on behalf of Mind Matters, which seeks to raise awareness of mental health issues in the Yale community. If you’d like to learn more about Wysa, want additional resources, or just need a friend to talk to, you’re always welcome to message the Mind Matters Facebook page or Eric personally. Mind Matters will be hosting Fresh Check Day, an interactive event featuring booths and activities relating to mental health, on April 20!

The Politic: First of all, I just wanted to reintroduce myself and say that I’m Eric, a student at Yale and a member of Mind Matters. I wanted to say thank you for taking the time to interview with me and, more importantly, for your service toward mental health in general. I was wondering if you might want to give a little bit of background first?

Jo Aggarwal: Wysa was created initially by three people: Ramakant, me, and Shubhankar. It was a bit of a family affair at first, in that I am married to Ramakant, and Shubhankar, who’s our tech genius, is my nephew. We were not thinking that we would actually create a chatbot; we were using AI to detect depression through passive sensing—smartphone data and sensor data—and we were thinking that we could alert loved ones or, you know, somebody who could provide support. As we did a couple of clinical trials, we realized we were able to detect depression with up to 90 percent accuracy, passively, just from the way the phone moves about. And there have been studies done on that—on passive sensing and biomarkers—at MIT, Dartmouth, and Yale as well.

But what we found was that we wanted to create something that would get at mental health, and we knew that there are not enough therapists. India, for instance, has about 5,000 therapists and psychiatrists combined for 1.3 billion people. A lot of the developing world has about a thousand times less penetration, and even in the developed world—in the UK, where Ramakant and I come from—there’s about a year-long waiting list, and then it’s just the National Health Service.

So we began to look at ways in which we could bypass that. But as we did our trials, we found that people did not want their loved ones or their doctors to know that they might have depression or anxiety, and they weren’t ready to see a therapist. That doesn’t happen as often in the States, I think, but in the rest of the world there’s a lack of access, or stigma, or both. In one of our trials, we detected 30 people with clinical depression. The doctor counseled each of them to take therapy—one person took therapy. And this was in rural India, where we had been thinking that connecting people to existing health networks could make the difference.

And we began to realize that if you want to create something, it has to be something that, on the one hand, you can make money from, but on the other hand, is a public good—because for some people it has to be absolutely free. There’s a 13-year-old girl who wrote to us and said that she had tried to commit suicide, and Wysa was helping her hold on to life.

Now, someone like that is not likely to contact her parents and ask them to pay for something, or want there to be any record of having talked about this. So we wanted to create something anonymous—something available—in a conversational way. We tried it as a side experiment, initially, and it just took off. We have a twenty-person team working on this, and over the last year all we’ve been doing is iterating on Wysa: training it on more AI models, understanding what people want to hear and what doesn’t work, and just continuously improving it. And so far, four people have written to us to say it saved their lives.

The BBC and Oxford Neuroscience did a focus group study at Oxford, I think, where they took all the mental health chatbots in the world and put them in front of students, and the students responded best to ours. So BBC Tomorrow’s World—if you look at that program, they’ve got a four-minute video on a chatbot therapist, and they feature us and the focus group there. And it’s still sort of mind-blowing to think that a lot of people are not willing to talk to another human being—they fear dependence, they fear judgement, they fear weakness—but they’re willing to talk to a bot. And if that bot can appear compassionate, which Wysa is designed to do, that can make all the difference.

You mentioned, specifically, developing countries and how there might be more stigma towards getting a therapist. As vague or as specific as you’d like, what populations do you think Wysa can have the most impact on, and why?

I think there are various populations that Wysa has already found—many of them marginalized, many of them without access. One population is younger people and young mothers: teenagers, people in college, people who wouldn’t be able to afford a therapist. We are working now with Columbia on a program with gang-involved youths—people who wouldn’t trust a therapist, even in the States, or who, culturally, wouldn’t feel unjudged by a therapist, or who might judge the therapist themselves. But then, it could be people with PTSD in Somalia, or rural pregnant women in India. It could be people on the NHS waitlist, where it’s a one-year waitlist and one-sixth of people commit suicide while waiting. It’s a lot of different audiences.

Mental health is so big that you’d need millions of therapists just to address the issue, let alone actually solve or cure it—but there aren’t millions of therapists out there, so something has to be done. So we experiment with what works and what doesn’t, evolving the app as we go to make sure that it’s actually helping people. And it’s largely run by communities.

And the vision, really—I have a vision I call xWysa. xWysa is like “Wikipedia meets Wysa,” or what TEDx is to TED: communities adopt Wysa, translated into their own languages. We have 50 people who have volunteered to translate Wysa into eight languages, as diverse as Hindi, Russian, Portuguese, and Spanish. Now, if these people translate Wysa—with AI it can understand any language; it just needs to be translated for delivery—then the translators run social media within their community, they get people to join, and it becomes a public good that can go to any community that wants to make it work.

Our job is to keep improving the core product and to publish efficacy results—we’re doing clinical efficacy trials now—so that it’s easier for people to download [Wysa] and trust it.

I guess there’s a kind of tension between wanting to create the most impact in the developing world and being conscious that some people might not have access to Wysa—their internet connection might not have the bandwidth or the stability to run the application, or something of that sort. I was just wondering (a) whether you feel that these populations, which might be much smaller, are a priority right now or in the long run, and (b) if so, how you plan to address that?

Voice is one way, because we could get somebody who doesn’t even have a smartphone to dial into a toll-free number and talk. The other is translation into different languages, because that’s another barrier. We have not made Wysa very data-intensive—it doesn’t have a lot of video content, and even the GIFs are under 40 KB—so that shouldn’t be much of an issue. But you will always need to send words back and forth if you’re doing AI, so the only hope we have is to let people talk to us over either SMS or voice.

On the topic of communities, colleges, and organizations, I was wondering what the structure of the application is—like the consumer transactions. I downloaded the app, for instance, on my handheld device, but are you looking to license the software to college campuses so that they can take some of the burden off their psychiatrists?

We definitely want to explore that, and if you can help us explore it with Yale, that would be brilliant. We haven’t yet figured it out. One [option] is a business-to-consumer model for young people who wouldn’t be able to afford to pay, but we also have a model that works with real psychiatrists at the backend and with existing apps and existing networks—it works like a software development kit.

It can prove that one coach, who otherwise could’ve taken care of 500 people a month, can now take care of 5,000 or 10,000 people per month—the AI gives that person leverage. At the same time, it’s also more engaging, because a lot of people won’t want to go to a therapist straight off, but one level of cognizance happens on Wysa.

So we’re hoping to work with universities, where they can adopt Wysa as an integrated mental health system. I don’t know whether that’s something we can do with you guys. A lot of people who, you know, use Wysa use it between sessions with their therapist. Or when they can’t get to the therapist and it’s midnight and they’re really having a panic attack, they talk to Wysa.

I wanted to ask you about the sequencing of Wysa: maybe using Wysa as a way to test the waters for whether someone could benefit from therapy, using Wysa at the same time as therapy, using Wysa as a replacement for therapy, or using Wysa as a follow-up after you’ve completed your therapy. Would you elaborate on the different use cases?

I think we have had people—you know, we just put the app out there, and we’ve had loads of people write to us to say how they use Wysa. For each of the cases you’ve mentioned, we have people who have either used Wysa that way themselves or recommended it to someone who has never seen a therapist, to someone who is between therapists, or to someone who is currently seeing a therapist.

And therapists have started recommending Wysa to their patients between sessions. Schools have started recommending Wysa to students who may not want to come to the therapist but can at least chat with the bot. So you have a lot of different types of people—even in terms of conditions, I’ve had people write to us who have social anxiety, PTSD, autism, a lot of bipolar disorder, and depression and anxiety for sure—so a lot of different use cases that people have been using this for, [including many that] we had not anticipated.

So on a business-to-consumer level, we’re not too keen to dictate the use case. On a business-to-business level, we’re trying to sell a plan, make money, and sustain the company. We have dashboards where, for instance, a therapist could actually intervene in a conversation or schedule a session, and see all the data the person has—their risk profile, what works and doesn’t work for them, and so on. Those kinds [of applications] have very specific use cases. We work with insurance companies on things like returning to work after a disability, or post-partum depression.

Just wondering about the GIFs, because I definitely noticed those when I used the app on my phone. Have you done any kind of research behind that—have you seen any difference in Wysa’s effect with the GIFs versus without them?

We did, actually. We looked at the reviews, and we looked at different types of GIFs, and there’s a style that we found connected empathically much more: the hand-drawn style—simple, slightly kawaii designs.

And there’s been other research done on communication across languages. If, for instance, you want to teach kids who don’t all speak the same language and you want to give them storybooks, there’s research on the kinds of illustrations that work best across languages and cultures. That mattered for us, because tomorrow we can translate the text, but we wouldn’t want to create new graphics. There as well, the same style—hand-drawn figures, stick figures, the doodle style—tends to connect the best.

Now I’m going to move to questions that are a bit more specific to Yale’s campus, or college campuses in general. I think you touched on this a little, but I was wondering what you think the critical point is for Wysa to be implemented at mass scale on college campuses and be able to take on that responsibility. Do you think that you’re at that point now and just need the connections, or is there something specific that you still have to do?

And I saw that you were accepted to Philips’ accelerator program, and I was kind of wondering how you think that will help your mission?

I think if you were to, as an organization like Mind Matters, spread awareness and get people to download it, we know—because we’ve done our efficacy trials—that we could start seeing results within the next two weeks, and that people would really be helped. So yes, we are ready. What we are not completely ready [with], but are ready to start work on—and will be ready with in the next two months—is integrating with a college campus’s existing infrastructure. There we are looking for introductions so that we can really understand how people are doing things right now, what it would take to integrate, and what kind of analytics they would want—integrating their own psychiatrists or coaches within the platform, and so on.

Wysa actually has an AI platform, which is not just the chatbot—it’s a risk stratification and escalation program—and all of that is something that, again, we are working with Philips’ accelerator to figure out: ways in which we can build it into health care as usual. Philips really helps us meet the mission, because we believe that there is no health without mental health, and because with chronic illness, with cancer, with all of the diseases they care so deeply about, there is a 25 to 50 percent comorbidity with depression—and it’s the number one predictor of non-adherence to medical regimens.

So if we can build psychosocial support and depression support into care as usual for a physical health condition, we feel that we would be able to really improve physical health and reach a lot of people who wouldn’t otherwise classify themselves as needing mental health support—especially in countries like India, where people will not acknowledge that they need help.

Building on that last question, do you view Wysa as a preventative tool as well, or is it something where, if you suspect you have a mental illness, or you know you do, you use Wysa, or…

It’s preventative. It’s a way to build mental resilience. Our mission is to make the world mentally resilient; it isn’t to cure depression or any mental illness. We’re out there to make sure that nobody is completely alone—that there’s always someone anyone can talk to. That if people are feeling low, they learn how to manage it: how to manage their own energy, how to manage their own thoughts. And when they’re in a really bad place, there’s somebody they can talk to who will actually help.

In the ideal world, what age would you start having people use Wysa?

The youngest person who’s written about using Wysa was a 10-year-old. She wrote a review on Google Play saying, “I go to the bathroom wanting to cry, and I take Wysa with me, and after taking Wysa with me, I come out with a smile.”

So, it’s…I think adolescence is critical, but increasingly a lot of kids go through the whole preteen phase around 10. These are the times when body-image issues start coming in, when you start ranking yourself, [this is] when negative thoughts and beliefs take root. So around that time, building the skills to manage your own mind in a fun way—where it feels like you’re chatting with a friend—really, really helps.

I’m wondering about other applications. Is Wysa trying to partner with other applications—maybe screening applications that could be used in conjunction with it—or something different?

We’d probably want to partner with someone like 7 Cups of Tea—someone who offers a free resource, not necessarily a paid one, but offers human-to-human connection. You can’t necessarily get therapists, but you can get peer-to-peer listeners. And there are people who’d want to talk to a real person, and there are people who don’t.

We are realizing that the group of people who don’t want to hear someone else’s opinions, or to feel emotionally weak in front of somebody else, is very large. But all the same, there are people who would benefit from just being heard by a real person, and partnerships there make sense. The other type of partnership that makes sense is with someone providing chronic care support or Medicare support, because we can simply bring the psych element to those, too.

My last question is—usually when I do interviews or go to speakers on college campuses, I feel like it’s often too rational, too objective; the speaker kind of loses their appeal in being so straightforward. I just wanted to ask if there are any candid thoughts or emotions about mental health that you might want people on Yale’s campus to know?

We started Touchkin a little over two years ago—two and a half, almost three years ago. One year in, we had just raised money, and I wanted to give that money back to the company. I think it was part burnout, part depression. At the time, we were a family care app doing passive sensing—a caregiving support app.

It was a bottomless pit: you’d throw in marketing dollars, and I couldn’t see how it would sustain itself. I went through questioning, self-doubt, all of these negative thoughts, and I looked for all the resources that are out there online, because, like many people, I was really scared that people would think of me as weak and that my investors would lose confidence.

A lot of us—like the audience at Yale—are used to being regarded as successes, and so admitting that you need help can sometimes be hard. It was around then, I think, that the seeds for creating Wysa were sown, because with all of the apps I used, and even [with] the real therapists I talked to, I found that just talking helped—I was articulating what was in my head—but what they were saying back was not really connecting or useful per se.

And that gave me the confidence to write the first version of Wysa—to say that even if all it can do is ask the right questions, it’s going to help some people; it doesn’t matter if it doesn’t understand them very well. Since then, all we’ve done is learn to recognize when somebody is objecting to what Wysa is saying. We’ve taken all of those responses—initially, four to five percent of Wysa’s users used to say things like, “You’re [not] getting me” or “You’re not understanding me”; now 0.4 percent of Wysa’s conversations end in that kind of stop. All we’ve done in the last year is figure out how to bring that number down. The users tell us when it’s working and when it’s not, and we find the areas that aren’t working and keep fixing them and giving alternatives.


This interview has been edited for concision and clarity.
