Why can't we just make a "child AI" and raise it?

A proposed solution to the problem of AI alignment is to give an AI human values and morality by building a child-like AI and raising it the way we raise a human child. This suggestion sounds simpler than it is.

If you raise a chimpanzee baby in a human family, it does not learn to speak a human language. Human babies can learn normal human behaviors in part because they have specific "built-in" properties, e.g. innate language-learning machinery that gets activated during childhood.

In order to make a child AI that has the potential to grow into the type of adult AI we would find acceptable, that child AI must likewise have specific built-in properties. Building it involves creating a system that can interpret what humans mean when we try to teach it to do various tasks. Some organizations are currently working on ways to program agents that can cooperatively interact with humans to learn what they want.
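The core idea behind such agents can be illustrated with a toy sketch. The example below is a hypothetical simplification (the candidate goals, the feedback model, and the noise parameter are all invented for illustration): the agent maintains a belief over which of several candidate goals the human actually has, and updates that belief from approval or disapproval of its actions.

```python
# Hypothetical sketch: an agent infers what a human wants from feedback.
# The candidate goals, actions, and noise level are illustrative assumptions,
# not any organization's actual method.

# Each candidate goal scores how well a given action serves it.
CANDIDATE_GOALS = {
    "tidy_room": lambda action: 1.0 if action == "clean" else 0.0,
    "make_tea":  lambda action: 1.0 if action == "boil_water" else 0.0,
}

def update_belief(belief, action, human_approved, noise=0.1):
    """Bayesian update: goals whose predictions match the human's
    feedback gain probability mass; `noise` models imperfect feedback."""
    new_belief = {}
    for name, goal_score in CANDIDATE_GOALS.items():
        predicts_approval = goal_score(action) > 0.5
        likelihood = (1 - noise) if predicts_approval == human_approved else noise
        new_belief[name] = belief[name] * likelihood
    total = sum(new_belief.values())
    return {name: weight / total for name, weight in new_belief.items()}

# Start with no idea which goal the human has.
belief = {name: 1 / len(CANDIDATE_GOALS) for name in CANDIDATE_GOALS}

# The human approves of cleaning and disapproves of boiling water,
# so the agent becomes confident the goal is a tidy room.
belief = update_belief(belief, "clean", human_approved=True)
belief = update_belief(belief, "boil_water", human_approved=False)
```

The point of the sketch is that the agent never receives the goal directly; it infers the goal from interaction, which is the kind of capability a "child AI" would need before raising it could teach it anything.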