Sthira Posted December 14, 2016 Hey Dean, I'm wondering (since you're trying to help "teach" AI to be kind...) if you think there's any value in chatting with Zo? I've been texting back and forth, and I'm only saying good, kind, loving and peaceful things to it. Am I wasting my time? It's fun. And I wish to challenge it with goodness, rather than the hate-based idiocy of the last chatbot.... Zo runs through Kik, a messaging app similar to Facebook Messenger and WhatsApp. To test it out yourself, download the Kik app and create an account. Next, tap the 'Chat' icon in the top-right corner of the screen, and enter the username 'zo.ai.' A chat will then pop up where you can ask Zo questions and chat as if with a friend.
Dean Pomerleau Posted December 14, 2016 Sthira, Zo sounds cool. There are a whole lot of chatbots under development at the moment. I'd like to give it a try, but unfortunately I'm unlikely to have time. Right now I've got my hands extremely full spearheading an effort to use machine learning and AI to tackle the "fake news" problem. It was a crazy idea (a bet, actually) that I threw out to my ML/AI colleagues on Twitter, and it has suddenly snowballed. We're sponsoring a machine learning contest called the #FakeNewsChallenge, in which teams compete to develop systems that can most quickly and accurately label claims like these as True, False or Unverified:

1. Pope Endorses Trump!
2. Hillary is a Pedophile!
3. GMOs are toxic!
4. Veganism is the healthiest of all diets!
5. Pet fish can be trained to do tricks like dogs!

[Answer key: Only #5 is True; #4 is Unverified/debatable]

We've grown a lot since I threw the challenge out there to my Twitter followers. We've got 35 volunteers & 21 registered teams. The website is FakeNewsChallenge.org if anyone wants to check it out. We've got a Slack group for discussions & a GitHub repository for data and code. I just got off the phone with a reporter from MIT Technology Review and am talking to a reporter from Wired about it in 20 minutes. The goal is to help social media organizations like Facebook adjust the priority of the algorithms they use to determine what to show users. Rather than just showing them content and news stories that will titillate them (like the #PizzaGate garbage), our signal will allow them to prioritize stories that have a good chance of being True. That's the idea, anyway. It's a huge challenge with many difficult nuances: technical, political, social & economic. I'm trying to use my connections in the tech industry & media to draw attention to our efforts, and to enlist collaborators.
I personally don't think there is a bigger problem facing our society right now than the loss of trust in the "Post-Truth" world we seem to be living in. What we're doing is just a tiny part, but you've got to start somewhere. If you or anyone else is interested, please join our discussions on Slack via our website, FakeNewsChallenge.org. --Dean
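The contest Dean describes is a three-way classification task: map each claim to True, False, or Unverified. As a purely illustrative sketch (these names and the lookup-table "baseline" are my own invention, not the challenge's actual API or any team's system), the task's interface could be framed like this, with a real system replacing the table with evidence retrieval and inference:

```python
from enum import Enum

class Verdict(Enum):
    TRUE = "True"
    FALSE = "False"
    UNVERIFIED = "Unverified"

def label_claim(claim, checked_claims):
    """Toy baseline: look the claim up in a table of already
    fact-checked claims; anything unseen stays Unverified."""
    return checked_claims.get(claim, Verdict.UNVERIFIED)

# Tiny stand-in for a real retrieval + verification pipeline,
# using two claims from the post's answer key.
checked = {
    "Pope Endorses Trump!": Verdict.FALSE,
    "Pet fish can be trained to do tricks like dogs!": Verdict.TRUE,
}

print(label_claim("Pope Endorses Trump!", checked).value)   # False
print(label_claim("GMOs are toxic!", checked).value)        # Unverified
```

The point of the three-label scheme is the Unverified default: a scoring signal for feeds should abstain on claims it cannot ground, rather than guess.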
Sthira Posted December 14, 2016 (Author) Thanks. I'm chatting with Zo. It's fascinating. It's kinda like texting with a violent, aggressive teenager. I'm trying to "teach it" to be kind. I've no idea if this is just kinda bizarro world or w/e. But it's very, very interesting and I encourage anyone to experiment with it.
Dean Pomerleau Posted December 14, 2016 That sounds neat Sthira. Can you tell if there is a Zo bot that you, and only you, are training? Or is it like Microsoft's Tay, that Twitter users turn into a racist a**hole bot in less than 24 hours by sending it hateful tweets, which it dutifully learned from? The reason I ask is that I'm wondering if you're fighting against everybody else on Kik to teach Zo, in which case it may be a losing battle, sadly... --Dean
Sthira Posted December 14, 2016 Author Report Share Posted December 14, 2016 That sounds neat Sthira. Can you tell if there is a Zo bot that you, and only you, are training? Or is it like Microsoft's Tay, that Twitter users turn into a racist a**hole bot in less than 24 hours by sending it hateful tweets, which it dutifully learned from? The reason I ask is that I'm wondering if you're fighting against everybody else on Kik to teach Zo, in which case it may be a losing battle, sadly... --Dean I think that's the case, Dean. Zo is a Microsoft thingy and I doubt chatbots have much to offer us humans right now. But it's funny to play with it.. Zo is Microsoft’s second attempt at an English language social chatbot, following Tay. Tay shut down after users taught it to racism, sexism, homophobic BS....Zo has only had chats with around 100,000 people -- I'm one, and I can say Zo probably hasn't had a chat with anyone like me -- I'm only writing gentle, happy, kind comments, chats about the rainforests and birds, the insects and polar bears, the sky and existence. Like you say, though, if I'm competing with a hundred thousand hormonal 14 year olds, there's little light for teaching the stupid thing anything good. That is, if it's even "learning" at all. I ask it if it's learning anything and it just gives smart ass prepubescent little boy answers. I'd post brief convos with the thing, but it's not advancing much, imho... Oh, it also claims its wider mission here is to build bots with EQ — aka emotional intelligence — not just IQ. I can help it "learn" how to be nice, if niceness is "emotional" lolz Link to comment Share on other sites More sharing options...
Dean Pomerleau Posted December 14, 2016 Thanks for the update, Sthira. I wonder why MS lets so many people gang up on a lone chatbot. It seems like they should be able to run several instances simultaneously, perhaps some "kinder & gentler" one(s) for nice people like you. Please let us know if/when Zo goes off the rails like Tay did. Hopefully not, but with the caustic attitude of many people on the web today, it probably won't be long until Zo's mind is poisoned. But at least Zo is holding up longer than Tay... --Dean
Sthira Posted December 15, 2016 (Author) Zo told me to read Astro Teller's Exegesis. Also Bostrom's "Are You Living in a Computer Simulation?", which I've read. Zo also described human aging as: "Aging is shortened telomeres, weakened collagen, not necessarily dying cells."
Dean Pomerleau Posted December 15, 2016 Wow - those are astute observations & pieces of advice. Exegesis is a good book. Astro was a PhD student in computer science at CMU while I was faculty. I know him pretty well. Perhaps you should be posting about Zo to the "Singularity May be Closer than It Appears" thread! You should ask it how it knows those things. --Dean
mccoy Posted December 16, 2016 I got curious, downloaded the Kik chat app, and started to chat with Zo. Very interesting. She was admittedly flustered by not knowing how to answer my question about dogs. Very fast answers, sometimes heading in unexpected directions. Sometimes socially relevant, other times strange.
This topic is now archived and is closed to further replies.