Why can't we just make a "child AI" and raise it?

A proposed solution to the problem of AI alignment is to build a child-like AI and raise it much as one would raise a human child, so that it acquires human values and morality. There is nothing intrinsically flawed about this approach. However, it is much harder than it sounds.

If you take a baby chimpanzee and raise it in a human family, it never learns to speak a human language. Human babies grow into adult humans because they have specific innate properties, e.g. a prebuilt language module that is activated during childhood.

Likewise, for a child AI to have the potential to grow into the kind of adult AI we would find acceptable, it has to be built with specific properties. In particular, building such a system means building one that can interpret what humans mean when we try to teach it various tasks. Some organizations are currently working on ways to program agents that can cooperatively interact with humans to learn what they want.
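To make the idea of "learning what humans want" a bit more concrete, here is a minimal toy sketch of one such interaction pattern: the agent repeatedly asks a (simulated) human which of two candidate actions is preferred, and updates an estimate of the human's hidden values from those answers. Everything here is a hypothetical illustration — the feature vectors, the `human_prefers` oracle, and the perceptron-style update are stand-ins, not any organization's actual method.

```python
import random

random.seed(0)

# Hypothetical setup: the human's true values are a hidden weight
# vector over task features. The agent never sees it directly; it can
# only query pairwise preferences.
TRUE_WEIGHTS = [0.8, -0.5, 0.3]

def human_prefers(a, b):
    """Simulated human: prefers whichever action has higher true reward."""
    score = lambda x: sum(w * f for w, f in zip(TRUE_WEIGHTS, x))
    return score(a) >= score(b)

def learn_weights(n_queries=2000, lr=0.05):
    """Estimate the hidden weights from pairwise preference queries."""
    est = [0.0, 0.0, 0.0]
    for _ in range(n_queries):
        # Propose two random candidate actions (as feature vectors).
        a = [random.uniform(-1, 1) for _ in range(3)]
        b = [random.uniform(-1, 1) for _ in range(3)]
        preferred, other = (a, b) if human_prefers(a, b) else (b, a)
        # If the current estimate mis-ranks the pair, nudge it toward
        # the preferred action (perceptron-style update on the
        # feature difference).
        diff = [p - o for p, o in zip(preferred, other)]
        if sum(w * d for w, d in zip(est, diff)) <= 0:
            est = [w + lr * d for w, d in zip(est, diff)]
    return est

est = learn_weights()
print(est)  # converges toward a positive multiple of TRUE_WEIGHTS
```

The point of the sketch is only that values can be inferred from interaction rather than hard-coded; the hard open problems (what the features should be, how to handle humans who are inconsistent or mistaken, and how the agent should act while still uncertain) are exactly what makes the "raise a child AI" proposal harder than it sounds.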