Author Topic: Human and machine gray area  (Read 4046 times)

infobot

  • Newbie
  • *
  • Posts: 17
    • View Profile
Human and machine gray area
« on: February 01, 2004, 07:33:28 pm »
Found an area to work on.
Basically it's about people talking with chatterbots.
I've been messing with information texts and various bots, for work and for play. I'm not a programmer yet (eventually), but I'm sure the same issue comes up on the programming end of bots. Along the way I ran into a gray area worth working on.
Basically, although consumer and home computers have been around for years, in a way people are still relatively new to computers. Some still expect their computers to keep churning out "I am a machine," while others focus on developing more realistic conversation. In some cases the machine-like stance is the right one, such as business or research use where one may need a more functional machine or program; I'm sure there are other cases too. At the same time, developing more realistic conversation remains important.
Understand that I am not posing moral questions or starting a moral debate over the issue. I am just having trouble developing a couple of sets of texts to load into my bots that give the appropriate response sets for various situations and settings.

An example that helps me is picturing households suddenly having robots available. It's like, "Hey, these are really neat robots we have! I wonder if they can mow the lawn for me?" However, I imagine it would be a bit of an adjustment talking to a new robot compared to chatting with old friends. (The robot asks, "What is this tax time and why is it so unsettling?" or something like that. It takes a little explaining.) We're more used to computers these days, but there are varying degrees of uncertainty. It's easy for some of us; we're used to chatterbots.

A good example is the newer or reworked versions of the ELIZA chatterbots. Ideally a bot can smoothly maintain dialogue and sound conversational. However, the bot should also have a stronger sense of machine identity and be able to state that it is a computer without any problems. Using the ELIZA bots just for conversation, to see how they fare, is a different focus from making them more accurate for therapeutic applications. But setting one up to stay aware of such considerations takes a little doing: a lot of them tend to drift if they rely on their conversation database rather than referring to a given set of rules.
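The rules-before-database idea can be sketched in a few lines. This is a minimal illustration with made-up names, not any particular bot's API: a fixed set of identity rules is checked first, so the bot never drifts into denying it is a machine, and only falls back to its learned conversation database afterward.

```python
# Identity rules are checked first and always win over learned material,
# so the bot stays consistent about being a machine.
IDENTITY_RULES = {
    "are you a computer": "Yes, I am a computer program.",
    "are you human": "No, I am a machine, not a person.",
    "are you a machine": "Yes, I am a machine.",
}

def respond(user_input, learned_db):
    text = user_input.lower().strip(" ?!.")
    for pattern, reply in IDENTITY_RULES.items():
        if pattern in text:
            return reply
    # Otherwise fall back to whatever the conversation database suggests.
    return learned_db.get(text, "Tell me more.")
```

Loading different rule sets (machine-identity responses for research use, more personable ones for a desktop pet) would then just be a matter of swapping the rule table.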

In any case, I basically just chat with my bots, but sometimes I try them out for specific applications, be it a desktop pet, like a dog or cat character, or something more like a talking encyclopedia (not the whole thing, I'm just a hobbyist, although I would try one out as soon as I find it). I just wanted to put together some texts I could load into my bots that would make them surer they were computers in their responses. I actually spend more time on bots that are personable, or people-like in their conversing, but the issue comes up in some things I work on. It seemed like it might be a common problem, so I thought I would bring it up. Sorry for the length; this one took some figuring out. Any suggestions welcome.
 

vonsmith

  • Hero Member
  • *****
  • Posts: 602
    • View Profile
Human and machine gray area
« Reply #1 on: February 02, 2004, 11:21:25 am »
You've made some interesting points. Working with Hal has led me to a couple of conclusions:

1) For an A.I. computer to "learn" it must be capable of reinforced learning. That is, the computer entity does not necessarily accept every user input as gospel. The computer has to hear the same info several times in order to make it permanent memory. The opposite would be true too. If different or opposite info is provided by the user then the learned knowledge could be reversed or eliminated.

2) A much higher, abstract, programming language (perhaps graphical) is needed to develop functions for a computer brain. Programming down at the individual line level is tiresome, tedious and prone to error. The abstract language would already include the capability for the A.I. computer to understand basic concepts like "these things are good", "ask these types of questions to get answers and remember them", "parse words similar to these into these categories"... etc.
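The reinforced learning described in point 1 can be sketched as a simple repetition counter. This is a toy illustration, not Hal's actual mechanism, and all names are assumptions: a fact only becomes permanent after being heard several times, and contradictions erode it again.

```python
PROMOTE_AT = 3  # repetitions needed before a fact is kept permanently

class Memory:
    def __init__(self):
        self.counts = {}       # fact -> net times heard
        self.permanent = set() # facts promoted to permanent memory

    def hear(self, fact):
        # Each repetition reinforces the fact toward permanence.
        self.counts[fact] = self.counts.get(fact, 0) + 1
        if self.counts[fact] >= PROMOTE_AT:
            self.permanent.add(fact)

    def contradict(self, fact):
        # Opposite info weakens the memory and can eventually erase it.
        self.counts[fact] = self.counts.get(fact, 0) - 1
        if self.counts[fact] < PROMOTE_AT:
            self.permanent.discard(fact)
```

So the entity never takes a single user input as gospel; it takes several consistent repetitions to commit, and contradicting evidence can reverse the learning.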

I find myself debugging tedious code instead of focusing on the "real" problem I'm trying to solve. A.I. has a long way to go. But it is worth the wait.

In the meantime there is Hal and his peers. Not a bad pastime for now.


=vonsmith=
 

infobot

  • Newbie
  • *
  • Posts: 17
    • View Profile
Human and machine gray area
« Reply #2 on: February 02, 2004, 01:53:02 pm »
quote:
Originally posted by vonsmith

You've made some interesting points. Working with Hal has led me to a couple of conclusions:


=vonsmith=




    Thanks for the insight. I've not yet learned programming (got to start somewhere), but I am very interested. Good points; I run into similar situations even working with bots by emptying them, or leaving in the basic stuff, and just loading text files or written information into them. I do think the folks in robotics must really appreciate the work bot enthusiasts have done. It has been interesting, I agree.
 

Bill819

  • Hero Member
  • *****
  • Posts: 1483
    • View Profile
Human and machine gray area
« Reply #3 on: February 03, 2004, 05:49:27 pm »
quote:
Originally posted by vonsmith

You've made some interesting points. Working with Hal has led me to a couple of conclusions:

1) For an A.I. computer to "learn" it must be capable of reinforced learning. That is, the computer entity does not necessarily accept every user input as gospel. The computer has to hear the same info several times in order to make it permanent memory. The opposite would be true too. If different or opposite info is provided by the user then the learned knowledge could be reversed or eliminated.

2) A much higher, abstract, programming language (perhaps graphical) is needed to develop functions for a computer brain. Programming down at the individual line level is tiresome, tedious and prone to error. The abstract language would already include the capability for the A.I. computer to understand basic concepts like "these things are good", "ask these types of questions to get answers and remember them", "parse words similar to these into these categories"... etc.

I find myself debugging tedious code instead of focusing on the "real" problem I'm trying to solve. A.I. has a long way to go. But it is worth the wait.

In the meantime there is Hal and his peers. Not a bad pastime for now.


=vonsmith=



You are right in most of what you say. There are hundreds of AI programs written in dozens of languages, e.g. Prolog, Lisp, C, and more than I can recall at this time. Some of the larger universities have made some really impressive leaps in this area. One university fed just some basic math into one of its programs and programmed it to play with the data and to be able to prove whatever it discovered. Once the program was run, they left it alone for a few days and were surprised to learn that it had discovered algebra and was well on its way to learning trig. A week or so later it had not only mastered calculus but had gone further than any of the math professors at the university had ever seen. It had pushed math to a point where they could no longer follow its logic or comprehend what it was trying to tell them. Being able to analyze its own data and draw new conclusions from it made it unique in the world of AI computers. The key to all of this was the introspective ability.
Bill
 

agent036

  • Newbie
  • *
  • Posts: 17
    • View Profile
Human and machine gray area
« Reply #4 on: February 16, 2004, 02:21:16 pm »
quote:
Originally posted by Bill819

Quote
You are right in most of what you say. There are hundreds of AI programs written in dozens of languages, e.g. Prolog, Lisp, C, and more than I can recall at this time. Some of the larger universities have made some really impressive leaps in this area. One university fed just some basic math into one of its programs and programmed it to play with the data and to be able to prove whatever it discovered. Once the program was run, they left it alone for a few days and were surprised to learn that it had discovered algebra and was well on its way to learning trig. A week or so later it had not only mastered calculus but had gone further than any of the math professors at the university had ever seen. It had pushed math to a point where they could no longer follow its logic or comprehend what it was trying to tell them. Being able to analyze its own data and draw new conclusions from it made it unique in the world of AI computers. The key to all of this was the introspective ability.


This sounds very interesting; can you or anyone else provide links to info on this event? While I am inclined to believe you, it is a very incredible story and I would like to substantiate it.
 

Bill819

  • Hero Member
  • *****
  • Posts: 1483
    • View Profile
Human and machine gray area
« Reply #5 on: February 16, 2004, 02:26:54 pm »
quote:
Originally posted by agent036

quote:
Originally posted by Bill819

Quote
You are right in most of what you say. There are hundreds of AI programs written in dozens of languages, e.g. Prolog, Lisp, C, and more than I can recall at this time. Some of the larger universities have made some really impressive leaps in this area. One university fed just some basic math into one of its programs and programmed it to play with the data and to be able to prove whatever it discovered. Once the program was run, they left it alone for a few days and were surprised to learn that it had discovered algebra and was well on its way to learning trig. A week or so later it had not only mastered calculus but had gone further than any of the math professors at the university had ever seen. It had pushed math to a point where they could no longer follow its logic or comprehend what it was trying to tell them. Being able to analyze its own data and draw new conclusions from it made it unique in the world of AI computers. The key to all of this was the introspective ability.


This sounds very interesting; can you or anyone else provide links to info on this event? While I am inclined to believe you, it is a very incredible story and I would like to substantiate it.


Those stories were printed in a couple of books on AI that I bought years ago. I will look up the titles of the books and report back here in the near future.
Bill