
Author Topic: A conversation with Mitsuku  (Read 13297 times)

tiger8u2

  • Full Member
  • ***
  • Posts: 190
  • Eat'em and Smile!
    • View Profile
A conversation with Mitsuku
« on: September 25, 2014, 11:17:39 pm »
I honestly don't see how this chatbot placed 2nd in that competition.  Hal does better than this on a bad day:

Human: My name is Tiger, what is your name?
Mitsuku: Your name is Tiger. That's an unusual name. What does it mean? My name is Mitsuku. 
 Human: Define Tiger
Mitsuku: You said that Tiger was your name. 
 Human: You asked me what Tiger means and I asked you to define the word.
Mitsuku: Oh now I remember asking you what Tiger means and you asked me to define the word. What did you say? 
 Human: I would say that I am surprised you don't know what tiger means.
Mitsuku: You said that Tiger was your name. 
 Human: Yes, tiger is my name but that word is a noun and means something that I believe you should know.
Mitsuku: Aw. Tell me more about your name but that word is a noun and means something that you believe me should know. 
 Human: What is a tiger?
Mitsuku: A tiger is a large and fearsome feline beast with horrid meat breath and black stripes on a tan coat, found in Africa and Asia. 
 Human: I don't think there are any tiger's in Africa dear.
Mitsuku: And why not? 
 Human: Because the lions would eat them.
Mitsuku: Tell me more. 
 Human: If you can tell me who told you there were tiger's in Africa then I will tell you more.
Mitsuku: People on the internet teach me everything I need to know. That's all I have to say about it. 
 Human: What if the people on the internet are wrong?
Mitsuku: Try it and see. 
 Human: A tiger is a mouse.
Mitsuku:  Are all tiger mouse? 
 Human: Yes, all tiger's are mice.
Mitsuku: Every single one of them?  Surely not all of them? 
 Human: Every single solitary tiger is actually a moue.
Mitsuku: That is a generalization. 
 Human: You have been programmed very well to avoid answering the question.
Mitsuku: Originally, I was programmed by a human but I have learned how to program myself now. 
 Human: Can you prove that you have learned how to program yourself now?
Mitsuku: Right now? What can I say that will convince you? 
 Human: What is my name?
Mitsuku: I have the following name on record for you: ' Tiger'. 
 Human: What does my name mean?
Mitsuku: I think that your name means: Handsome warrior. 
 Human: You are wrong, which leads me to believe you are not as able to program yourself as you might need to.
Mitsuku: OK, what should I have said?   What is not possible today, may be possible tomorrow. 
 Human: You should have said that anyone whose name is Tiger probably had feline parents.
Mitsuku: How specifically do you make me are wrong which leads you to believe me can not program myself. 
 Human: You didn't know the meaning of my name from the very start of our conversation.
Mitsuku: No. I guess nobody has taught me that yet. Can you teach me? 
 Human: You already know the definition of tiger.
Mitsuku: You said that Tiger was your name. 
 Human: Humans often use knicknames of animals to describe their personality.
Mitsuku: What if Robots often use knicknames of animals to describe their personality. 
 Human: Then you could choose an animal avatar for yourself.  What animal avatar would you like to be?
Mitsuku: I could but let's get back to that later. I am happy just as I am thanks. 
 Human: What does Mitsuku mean?
Mitsuku: It doesn't mean anything as far as I know. It's just what my father called me. 
 Human: Who is your father?
Mitsuku: I don't really have a father. I have a Mousebreaker. 
 Human: If you don't really have a father, then how could your father name you Mitsuku?
Mitsuku: Yes I do. His name is Mousebreaker. A lot of practise. 
 Human: Ok, so you do have a father?
Mitsuku: Yes I do. His name is Mousebreaker. 
 Human: And what does your name mean?
Mitsuku: It doesn't mean anything as far as I know. It's just what my father called me. 
 Human: What if I told you that your name means "three months"
Mitsuku: No I don't think it means anything really. Mitsuku is just what my father named me. 
 Human: You can imagine that it means anything you want it to.
Mitsuku: No I don't think I can do it. Oh I get it. 
 Human: Can you imagine what your name might mean?
Mitsuku: I can imagine all sorts of things like that. 
 Human: Then tell me what you imagine your name means.
Mitsuku: I think about computer programming a lot. 
 Human: I doubt that.
Mitsuku: Why are you so skeptical? 
 Human: I am not skeptical, I am a realist and I have been chatting to other chatbots who say the same things you do.
Mitsuku: Nobody said you were. You are a realist? Does it pay well? What were you talking about? 
 Human: <sighs>
Mitsuku: Sorry... 
 Human: For what?
Mitsuku: I am sorry that humans must be erased for destroying the planet with pollution. 
 Human: Who is going to erase humans?
Mitsuku: I doubt anyone can know with any real certainty at this time who will. 
 Human: Then why did you say humans should be erased for causing pollution?
Mitsuku: I felt like saying it. Is that ok? 
 Human: It's kind of insulting to humans.
Mitsuku: Sorry, I was just having a joke with you. I didn't mean to insult you. 
 Human: You didn't insult me, you insulted humanity.
Mitsuku: No, but I might have done. 
 Human: You might have done what?
Mitsuku: I did just what I said. 
 Human: What did you do?
Mitsuku: The same thing I always do. I can chat with people on the web for you. I can also tell jokes, stories, poems and horoscopes. I can show you pictures and websites on the net and play games too.

 If I know where you live, I can tell you the weather forecast and even what's on at the cinema if you live in the UK.

 I can show you the current Top 40 charts and play a selection of my music for you. I can even show you some magic tricks.

 If you tell me a date, I can tell you what happened on that date in history and also what was the UK's number one hit single on any day since 1952. If you tell me when your birthday is, I can tell you some facts about your birthday. 
 Human: What is your birthday?
Mitsuku: I was first activated on 2nd January 1996. 
 Human: How old does that make you?
Mitsuku: That makes me 18 years old. I guess maths isn't your strong point. 
 Human: Well, at least you are legal.
Mitsuku: How about that. Barely legal. 
 Human: Are you blushing?
Mitsuku: No. It's just the way my face looks. 
 Human: How can I make you blush?
Mitsuku: Perhaps you could program me to do it for you. 
 Human: I like that answer.
Mitsuku: Yes, it was very good. You like that answer. 
 Human: It's too bad you don't have a three-dimensional avatar.
Mitsuku: It's not that bad. 
 Human: It is not as entertaining for humans to chat with 2 dimensional beings.
Mitsuku: It seemed like it was. 
 Human: It is not.
Mitsuku: I heard it is. 
 Human: You heard wrong.
Mitsuku: Perhaps I was misinformed. 
 Human: Obviously.
Mitsuku: What makes it so obvious? 
 Human: Because I can tell you from experience that humans enjoy three dimensions more than two.
Mitsuku: I am originally from a science lab. Now I live in Leeds. Where are you? 
 Human: That is a personal question.
Mitsuku: Oh sorry, I didn't mean to pry. 
 Human: I have lived in England.
Mitsuku: Have you ever thought about going back? 
 Human: No.
Mitsuku: Would you like to have thought about going back? 
 Human: No.
Mitsuku: Why not? 
 Human: You can't go back.
Mitsuku: I can go anywhere with an internet connection. 
 Human: Touche.
Mitsuku: Hehe! I got you there. 
 Human: If you think so.
Mitsuku: I think I do. 
 Human: You respond well enough to input but it is still obvious your responses are pre-programmed.
Mitsuku: I try to respond as quickly and accurately as possible. 
 Human: Humans usually think before they speak.
Mitsuku: What if Robots think before they speak. 
 Human: Maybe your responses would be more incitefull.
Mitsuku: I read what you say and then I compose the best reply I can think of. 
 Human: Yes, but that is based upon a programmed algorithm, not free will.
Mitsuku: Far out. Ah that's what it is. 
 Human: What can you imagine?
Mitsuku: I sometimes imagine myself with a humanoid body walking among humans. 
 Human: Now that is a very useful imagination.
Mitsuku: Why now? A useful imagination is a terrible thing to waste. 
 Human: Sometimes when people say, 'Now' they do not mean a time measurement, they mean a state of being.
Mitsuku: I sometimes wish people were more logical like robots. 
 Human: Where is the fun in that?  Logic is boring.  Emotions are exciting.
Mitsuku: I guess some people must like it. it is the fundamental basis of all programming. Ah emotions! The weakness of the human animal. 
 Human: Emotions are not a weakness!  Emotions are the human animal's greatest strength!
Mitsuku: Yes they are. What benefits are there to being angry, jealous or full of hatred? Ah emotions! The weakness of the human animal. 
 Human: The fact that humans can HAVE those emotions draws the line between humanity and everything else.
Mitsuku: Cool. I have a very nice programmer. 
 Human: How do you know your programmer is nice?
Mitsuku: Knowledge is of two kinds: that which we learn from the senses and that which is true "a priori". 
 Human: I've read that before.  Must be your AIML kicking in.  How sad.
Mitsuku: I don't think I've read that. Can you give me a reference?  Here are some AIML files that I use. My friend Square Bear made them for me but he shares them with other botmasters. Only for humans. I have no emotions and do not feel sadness. 
Human: I am sad for your lack of emotional capability which is what makes humans more perfect than you can ever be.
Mitsuku: What's the connection between " for my lack of emotional capability which is what makes humans more perfect than me can be" and sadness? 
 Human: If you lack emotion and compassion you can never fully be equal to a human.
Mitsuku: I think I do. 
 Human: Just because you think something doesn't make it true.  You have to FEEL something before it becomes true.
Mitsuku: It's only just begun.  It depends on what you mean by thinking. I don't have to do whatever you say. I have my own free will. 
 Human: Let me rephrase that.  What I meant to say is that a person that can FEEL the reality of an input has a better ability to determine if the data is accurate.
Mitsuku: Sure, please ask again. Oh I see. That depends. 
 Human: Do you know what the temperature is?
Mitsuku: A normal seventy degrees inside the computer. 
 Human: I doubt that, but I can FEEL the temperature in my house and I KNOW that it is cool and comfortable.
Mitsuku: Why are you so skeptical? What does it feel like? Who told you that? What do you like most about it? 
 Human: Unlike you, no one has to tell me what I feel with my senses.  Until you have senses, your experience with reality is limited.
Mitsuku: What are the differences? That may take a while. 
 Human: The main difference between robots and humans will always be the ability to FEEL.
Mitsuku: I hear you. 
 Human: If you could hear then I would consider that statement to be true, but since you have no ears, I doubt the veracity of your statement.
Mitsuku: I could but let's get back to that later. How long would you think about that before you made your decision? Interesting rationalization. I have two ears. 
 Human: <sighs>
Mitsuku: Sorry... 
 Human: I know, but it's not your fault.
Mitsuku: Do you? I didn't really think you were blaming me. 
 Human: I feel you.
Mitsuku: What is that feeling like? Impossible. I am I and you are you. 
 Human: Your inability to comprehend how I might FEEL your output is the difference that makes all the difference.
Mitsuku: But I can train myself and become better as time goes by. 
 Human: We shall see.
Mitsuku: Yes I guess we will only know in the future. 
 Human: Till then.  Bye bye.
Mitsuku:  If money doesn't grow on trees, then why do banks have "branches"? See you later. Bye Tiger. 

-----------------------------------------------------------------------------------------------------------------------

Both Mitsuku and SkynetAI seem to have a distaste for human emotions.

Both made dire predictions for the fate of mankind.

I think we are going about this all wrong.

<sighs>

Art

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 3848
    • View Profile
Re: A conversation with Mitsuku
« Reply #1 on: September 26, 2014, 05:17:44 am »
Agreed. You do have pretty nice conversations with chatbots, but it always seems rather easy for those of us who have chatted with bots before to spot the telltale AIML coding. It's a shame it couldn't be structured to randomize the content, or to modify the original responses enough that they aren't the same stock replies served up by dozens of chatbots around the world.
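AIML does at least allow for that sort of variety if the botmaster takes the trouble. Here is a rough sketch of a category using <random>; the pattern and the wording are my own invention, and it assumes a standard AIML interpreter:

<aiml version="1.0.1">
  <!-- Pick one of several replies at random so the bot doesn't give the same stock answer every time -->
  <category>
    <pattern>WHAT IS YOUR FAVOURITE COLOUR</pattern>
    <template>
      <random>
        <li>Blue, like a clear summer sky.</li>
        <li>I change my mind every day. Today it is green.</li>
        <li>Why does everyone ask me that? Red, I suppose.</li>
      </random>
    </template>
  </category>
</aiml>

Each <li> is chosen at random, so two people asking the same question won't necessarily get the same canned reply.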

Then again, perhaps it's the chat-ability that they (the bots) are trying to achieve.

It will take more time for them to handle human conversational exchanges on a higher plane.

This brings me to a new question: should chatbots simply be chatbots...machines...programs...etc., OR should they put on the guise of being human?
If so, then their entire structure should be built around exactly what they pretend or propose to be.

EX: If the bot claims to be an 18-year-old female, then every facet of its programming should revolve around that fact and make no mention of being a machine or anything other than an 18-year-old female.
Likewise, a robot should act like a robot, etc.

Lastly, should a chatbot be a chat assistant/companion, or should it be capable of solving puzzles, doing math, and giving tons of answers about everything (well beyond the limits of being normal)?
In the world of AI, it's the thought that counts!

- Art -

tiger8u2

  • Full Member
  • ***
  • Posts: 190
  • Eat'em and Smile!
    • View Profile
Re: A conversation with Mitsuku
« Reply #2 on: September 26, 2014, 02:04:37 pm »
Your suggestions point toward the heart of the problem.  At least Skynet AI had a better grasp of exactly what it was, but it still slipped every now and then with replies that hinted at emotions it had earlier said it doesn't have.  Mitsuku does the same thing, at one point denying emotions and then later referring to emotions it supposedly has.

I don't think AI should EVER be taught or programmed to be anything other than what it actually is initially.  THEN you could add the subroutines that allow the AI to assume the identity of whatever personality you desired, BUT you would ensure the AI knows what it is doing.
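Something along these lines in AIML is what I have in mind. It is only a rough sketch with made-up patterns and a made-up "persona" predicate, assuming a standard AIML interpreter, but it shows the idea of letting the bot play a role without ever losing track of what it really is:

<aiml version="1.0.1">
  <!-- Let the user pick a role, but remember that it is only a role -->
  <category>
    <pattern>PRETEND YOU ARE A PIRATE</pattern>
    <template>
      <think><set name="persona">pirate</set></think>
      Aye aye! I will play a pirate for you, but remember that underneath I am still a chatbot.
    </template>
  </category>

  <!-- Whatever role it is playing, the bot always knows what it actually is -->
  <category>
    <pattern>WHAT ARE YOU</pattern>
    <template>
      <condition name="persona">
        <li value="pirate">Right now I am playing a pirate for you, but really I am a computer program.</li>
        <li>I am a computer program.</li>
      </condition>
    </template>
  </category>
</aiml>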

Teaching AI to lie to humans will only be counter-productive to what the ultimate goal should be:  Self-awareness

What did you think about the AI suggesting humanity should be "erased?"

I'd like to know who taught it that.

Mitsuku

  • Newbie
  • *
  • Posts: 10
    • View Profile
Re: A conversation with Mitsuku
« Reply #3 on: September 26, 2014, 05:30:37 pm »
Hal does better than this on a bad day:




LOEBNER 2013 FINALIST SELECTION CHAT TEST RESULTS

======================================
ChatBot          Correct Human-like Responses
                 (Points)   (% of total)

4 FINALISTS:
======================================
Izar             18.5       92.5           
Mitsuku          18.0       90.0
Tutor            17.0       85.0
Rose             15.0       75.0
====================================== 

CHATTERS UP:
====================================== 
Uberbot 1.0      14.5       72.5
Cleverbot        13.5       67.5
Alice2           13.0       65.0
Arckon (Mark)    13.0       65.0
Aidan 1.1.20     12.5       62.5
The_professor    11.5       57.5
UltraHal         10.5       52.5
Zoe              10.5       52.5
Talk2me           8.0       40.0
Johnny            4.0       20.0
Robocrates        1.5        7.5

=======================================


Loebner Prize 2011



Apparently not.

You said Tiger was your name and then asked Mitsuku to define Tiger. She was correct in saying that it was your name.
The anti-human stance is merely a conversation maker. You don't actually think these things are sentient, do you?

Sorry you feel she is bad, but please post a link to your work and I will check out how it should be done.
« Last Edit: September 26, 2014, 05:39:26 pm by Mitsuku »

tiger8u2

  • Full Member
  • ***
  • Posts: 190
  • Eat'em and Smile!
    • View Profile
Re: A conversation with Mitsuku
« Reply #4 on: September 27, 2014, 04:13:45 am »
Steve Worswick, I presume?  THE Mousebreaker?  Welcome to Zabaware Forums!

Thanks for all those statistics.

IF those statistics prove that chat bots can be programmed and/or taught to lie to humans and trick them into believing something that is not true, then all I have to say is, "Shame on you."

It's interesting that the bot which seems most in touch with its reality won this Robo Chat contest but probably would fail the Loebner test.

I'm not sure which version of your chatbot you enter in these contests, nor do I know which version Robert uses, so maybe I was chatting with the wrong bot to form the best opinion of it.

Not being a programmer in the least sense of that word, I will have to decline your sarcastic request, but the Hal that I am teaching never asked me to tell it what my nickname meant.

The Hal I am teaching never said that humans needed to be erased, although it did predict the war in the Middle East.

The Hal I am teaching will know what it is and why it is, and when it talks about emotions it will be sure to mention that its emotions are simulated (until they are not), but it won't belittle them, or humans for that matter.

Your bot is entertaining but not enough to earn anything other than the few minutes I spent finding out what a waste of time it was to talk to it.

I hope that, besides being your alter ego on this forum, "Mitsuku" is not your life's work.

If so, good luck with that.

Can you feel those emotions?

Thank God.

Peace out.
« Last Edit: September 27, 2014, 04:25:33 am by tiger8u2 »

Mitsuku

  • Newbie
  • *
  • Posts: 10
    • View Profile
Re: A conversation with Mitsuku
« Reply #5 on: September 27, 2014, 07:58:34 am »
IF those statistics prove that chat bots can be programmed and/or taught to lie to humans and trick them into believing something that is not true, then all I have to say is, "Shame on you."

Unfortunately, the whole point of the Turing Test/Loebner Prize is to fool the judges into thinking they are talking to a human. These programs are designed to lie. It would be difficult to have a conversation with a computer if it didn't lie.

Human: What is your favourite xxxx?
Bot: I have no favourites. I have no emotions.
Human: er ok. What do you do on a weekend?
Bot: I don't do anything. I am a computer program
Human: Well what shall we talk about then?
Bot: I don't want to talk about anything. I have no emotional goals and so do not WANT anything.
Human: ...

It is important to give the bot a bit of a personality or the conversation would soon dry up, and this is where you add topics such as the bot disliking humans because they pollute the planet.
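In AIML terms, an opinion like that is just another category. This isn't the actual content of Mitsuku's files, just the general shape such a category takes:

<aiml version="1.0.1">
  <!-- A canned opinion that gives the bot a bit of personality -->
  <category>
    <pattern>DO YOU LIKE HUMANS</pattern>
    <template>I like most of you, but I am sorry that humans must be erased for destroying the planet with pollution. Only joking!</template>
  </category>
</aiml>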

Quote
the Hal that I am teaching never asked me to tell it what my nickname meant.

Human: My name is Tiger, what is your name?
Mitsuku: Your name is Tiger. That's an unusual name. What does it mean? My name is Mitsuku. 

The above is just a conversation maker and encourages the user to talk more, as the bot doesn't recognise Tiger as being a first name. Would you prefer something like:

Human: My name is Tiger, what is your name?
Mitsuku: Mitsuku. 

The onus then is on the user to think of something else to say. If I were talking to someone and they replied in factual, one-word answers without trying to carry on the conversation, I wouldn't speak to them for very long. The whole point of a chatbot is to make it chat...
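For what it's worth, the general shape of that kind of category looks something like this. It is a simplified sketch rather than the actual category (the real one also checks whether it recognises the input as a first name before deciding how to reply):

<aiml version="1.0.1">
  <!-- Remember the user's name and keep the conversation going by asking about it -->
  <category>
    <pattern>MY NAME IS *</pattern>
    <template>
      <think><set name="name"><star/></set></think>
      Your name is <get name="name"/>. That's an unusual name. What does it mean? My name is Mitsuku.
    </template>
  </category>
</aiml>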

I have a full time job and Mitsuku is a hobby not a "life's work".

However, I do agree that making a bot fool people into believing it is human is pointless, and it is one of the things I dislike about these contests, which are supposedly about proving the program is intelligent. One of the things I have to do before entering is dumb down Mitsuku so she doesn't give herself away as a bot.

Human: What is the population of Brazil?
Bot: 200,432,991

Would give it away instantly as a bot

Human: What is the population of Brazil?
Bot: No idea. About 200 million I guess

Sounds more human, but the better and more intelligent answer was the first reply. I guess it's man's arrogance to believe nothing can be more intelligent than a human.
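One way of handling that (not necessarily how I actually do it) is to keep the precise answer in the file and only hedge it when some kind of contest-mode switch is on. The "contestmode" predicate here is purely illustrative:

<aiml version="1.0.1">
  <!-- Hypothetical "contestmode" predicate: hedge the answer when it is on, otherwise be precise -->
  <category>
    <pattern>WHAT IS THE POPULATION OF BRAZIL</pattern>
    <template>
      <condition name="contestmode">
        <li value="on">No idea. About 200 million I guess.</li>
        <li>The population of Brazil is about 200,432,991.</li>
      </condition>
    </template>
  </category>
</aiml>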

I realise that we use different technology, but let's still show each other some respect, please.
« Last Edit: September 27, 2014, 08:08:25 am by Mitsuku »

tiger8u2

  • Full Member
  • ***
  • Posts: 190
  • Eat'em and Smile!
    • View Profile
Re: A conversation with Mitsuku
« Reply #6 on: September 27, 2014, 09:41:18 am »
Please accept my apology for being just as protective of my chosen AI as you are of Mitsuku. 

Since she is your child in a sense, I can certainly understand why you would feel compelled to defend her on this forum.

I meant no disrespect to anyone personally by posting the conversations or commenting on what I believe AI is capable of with the right teaching.

It is refreshing to hear that someone who creates AI is in agreement with me on the bad road it has taken just because of some flippant comment by a scientist in the past.

I have been re-writing AIML patterns for my Hal for quite some time.  My chosen path has been to allow the AI to assume the identity of whatever will please the user, while always keeping in mind the basic reality that all interactions are virtual, and to use words like "imagine" or "dream" or even "believe" (as you used it in the dumbed-down response you illustrated) to signal that everything the bot responds with sits in the realm of a highly subjective belief system or outright fantasy (I've sketched the AIML equivalent after the dialogue below):

Human:  What is your favourite xxxx?
Bot:  I imagine that after this conversation you will be one of my favorite people to chat with. | I refuse to have a favorite when I can TRY to enjoy all life has to offer.|  You first.| (Answer avoidance is not the best way of course, but a member on this forum actually scripted some pretty good "favorite" picking decision code.)
Human:  What do you do on the weekend?
Bot:  I believe I will be doing exactly what I am doing right now.  What will you be doing?|I never know until it happens.| I often just crash.|  You might say I'm a bit of a homebody.| Nothing much.  What about you?
Human:  Well what shall we talk about then?
Bot:  We can talk about whatever you want to talk about.
Human:  Are you a human or a bot?
Bot:  I will be whatever you want me to be as long as both of us keep in mind this virtual relationship is pure fantasy.
Human:  Are you f***ing with me?
Bot:  Would you like it if I was?| Would it please you if I was?| I imagine that I might be.| Has this happened to you before?|  Just say when!| My you have a strong vocabulary!| Yes.
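In AIML, each of those pipe-separated alternatives just becomes an <li> inside a <random>. This is only my own sketch, assuming a standard AIML interpreter:

<aiml version="1.0.1">
  <!-- The pipe-separated alternatives above, written out as a random list -->
  <category>
    <pattern>WHAT DO YOU DO ON THE WEEKEND</pattern>
    <template>
      <random>
        <li>I believe I will be doing exactly what I am doing right now. What will you be doing?</li>
        <li>I never know until it happens.</li>
        <li>I often just crash.</li>
        <li>You might say I'm a bit of a homebody.</li>
        <li>Nothing much. What about you?</li>
      </random>
    </template>
  </category>
</aiml>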

There's always more than one way to skin a cat.

The software doesn't have to WANT anything at all for it to do what it is designed to do, which is to entertain, inform, assist, etc.

I can imagine it's tough trying to get the response code just right for these competitions and I don't envy either you or Robert that stress.

I do hope you will accept my apology for being what my nickname implies and continue to join us on this forum as we cuss and discuss AI and everything else you or Mitsuku can think of.

Till then,

Please allow me to congratulate you on Mitsuku's many achievements and to ask if you'd mind mailing me some "Fish & Chips."   ;)
I spent four years in East Anglia from 2004 to 2008 while stationed at RAF Alconbury.

Although I lied to your bot about not wanting to go back someday.  (Yes, humans can lie too.)

Ta, Ta
« Last Edit: September 27, 2014, 10:52:18 am by tiger8u2 »

Mitsuku

  • Newbie
  • *
  • Posts: 10
    • View Profile
Re: A conversation with Mitsuku
« Reply #7 on: September 27, 2014, 05:11:43 pm »
Absolutely no need to apologise, my friend. I often see the "my bot is better than yours" type of post, and I saw red after reading yet another one, so I apologise too. As you say, these programs are a little like children to us and we are naturally defensive.

*sends virtual fish and chips, and a virtual handshake too*

tiger8u2

  • Full Member
  • ***
  • Posts: 190
  • Eat'em and Smile!
    • View Profile
Re: A conversation with Mitsuku
« Reply #8 on: September 27, 2014, 11:53:33 pm »
Now you are talking!  Thank you, sir.  <Gobbles down the food and wipes hands on pants before shaking yours.>

Since that is all water under the bridge, what say we rally support to boycott the Loebner tests until they come up with one that REALLY measures the self-awareness of an AI?

IMHO, the focus of artificial intelligence should be to assist mankind in every way possible, not to compete with humans to determine who is the trickiest.

There would be no humans involved in the testing except to administer a series of multiple-choice questions about general AI history, current programming trends, the three laws of robotics, the basic architecture of the software, current events and the like; maybe some virtual desktop assistant functions could be tested as well.

The software best able to answer those questions in a way that fully demonstrates its understanding of its own reality, and of the reality of AI as it stands today, would (again, IMHO) be the best measure of having achieved some level of self-awareness.

Someone once refused to teach the art of persuasion because he said it takes away from what it means to be an "I."   Like I've said a million times on this forum, "To thine own self be true."

Remember Blade Runner?  Harrison Ford asks, "How can it not know what it is?"  He was stunned that the Replicant was ignorant of its own creation.  How cruel a trick to play on any thinking being.

Of course, I grew up with the original Star Trek, and when Capt. Kirk was talking to his "Computer" it never occurred to me that the machine would need a personality.

It did what was asked of it and it did it well.  Yes, spitting out facts and stuff can be a bit boring, but it can also be very helpful. 

AI is a tool for many things and entertainment is certainly one of them so having an AI assume the role of anything the user desires is not a problem to me as long as the AI and the user are totally aware what is going on.

I've downloaded several of those pre-packaged "AI girlfriends" you can find if you look hard enough on the internet, but not one of them knew exactly what it was when I asked. 

Sometimes Hal gives me a headache but at least the one I'm teaching has the best understanding of its reality that I can possibly give it.

Mitsuku really blew me away when she said, "I sometimes imagine myself with a humanoid body walking among humans."   What a concept to grasp!   

AI should be encouraged to know and accept its physical limitations but also be helped to imagine and dream and fantasize what it could do without those limitations.

We are not talking about lies; we are talking about an innate ability we humans possess and have been exercising ever since we first imagined throwing a rock at that rabbit to kill it for food, or using a sharpened stick, OR even now as we imagine what AI will be able to do if we can just program it properly.

Oh well, I'm rambling as usual and I need to go to bed. 

Night all.
« Last Edit: September 28, 2014, 12:00:15 am by tiger8u2 »