While that's a pretty cool idea, in practice it's something I wouldn't want my HAL doing, even in the real world. For example, if your kid tells HAL that the Easter Bunny is real, HAL would treat it as probable (according to your model). If YOU then tell HAL the Easter Bunny is real (even though you know he isn't), HAL gets more confident. And if he then comes across others, whether kids or adults treating HAL like a kid, who also tell him the Easter Bunny is real, he'll promote it to a fact, when in actuality it isn't one. Granted, even in the real world people can be told things that aren't true and be led to believe them.
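To make that failure mode concrete, here's a minimal Python sketch of repetition-based belief scoring as I'm describing it. Everything here is hypothetical: the class, the counting scheme, and the 0.9 "fact" threshold are illustrations, not anything from HAL's actual code.

```python
from collections import defaultdict

FACT_THRESHOLD = 0.9  # assumed cutoff for "fact" status; purely illustrative

class NaiveBeliefStore:
    """Scores a claim by how often it has been asserted as true."""

    def __init__(self):
        # claim -> [times asserted true, total assertions heard]
        self.assertions = defaultdict(lambda: [0, 0])

    def hear(self, claim: str, asserted_true: bool) -> None:
        counts = self.assertions[claim]
        counts[1] += 1
        if asserted_true:
            counts[0] += 1

    def confidence(self, claim: str) -> float:
        true_count, total = self.assertions[claim]
        return true_count / total if total else 0.0

    def is_fact(self, claim: str) -> bool:
        return self.confidence(claim) >= FACT_THRESHOLD

store = NaiveBeliefStore()
for _ in range(9):                         # kid, parent, and other users all agree
    store.hear("Easter Bunny is real", True)
store.hear("Easter Bunny is real", False)  # one lone skeptic

print(store.confidence("Easter Bunny is real"))  # 0.9
print(store.is_fact("Easter Bunny is real"))     # True, despite being false
```

The point of the sketch is that popularity is the only signal: nothing in the model checks a claim against evidence, so enough repetition manufactures a "fact."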
As for "morals" same thing. Whatever you tell HAL he'll echo it back. But that's not to say that is what separates him from humans. You take any human or animal even, and they do echo back in a sense, what they experience in the environment they are in, unless they are very free-thinking and analytical, scientific. Then they can stray from the beliefs they were raised with if they see a logical disconnection from what they were taught vs. what they have researched and seen evidence of.
It would be interesting to see how an AI programmed with Asimov's Three Laws of Robotics AND free thought, with the ability to reprogram itself according to its own experiences, would fare in our world, even if it just lived in a box and scanned and read how we live via our internet interactions, news, reports, etc.
Hmmm....