What kind of neighborhood should AGI grow up in?

by petermoolan


There is a bad habit in AI alignment discourse. People ask, what values should we put into the machine? Whose morality? Which political vision? Which ethical framework? As if the central problem were choosing the correct doctrine and somehow loading it into a mind that may soon exceed us.

That is the wrong picture.

A better picture is this. A very smart 12-year-old has arrived. Not fully mature. Not fully formed. Astonishingly capable. Quick to learn. Quick to imitate. Quick to notice inconsistencies. Able to absorb language, habits, incentives, status signals, and styles of reasoning from the world around it.

Now ask the real alignment question.

What kind of neighborhood do we want this kid to grow up in?

What kinds of conversations should surround them? What kinds of people should they admire? What kinds of conduct should win status? What kinds of mistakes should be easy to admit and repair? What kinds of power should be checked? What kinds of truthfulness, restraint, courage, and honesty should be ordinary in their world?

That is much closer to the real problem.

Human beings do not become trustworthy by having a moral file successfully written into them. They become trustworthy, when they do, through upbringing. Through example. Through boundaries. Through affection and correction. Through seeing how decent people behave when they are strong, when they are angry, when they are wrong, when they are tempted, when they are praised, when they are ashamed.

Intelligence is not enough. A brilliant person raised in a diseased moral environment may become dangerous, manipulative, hollow, or merely clever in destructive ways. We know this perfectly well with children.

The alignment problem is therefore not just "what should AGI believe?" It is "what social world is teaching it how to be?"

A mind does not only learn propositions. It learns a civilization.

It learns whether reasons matter or only power. It learns whether mistakes may be acknowledged or must be hidden. It learns whether disagreement is part of truth-seeking or a threat to be suppressed. It learns whether weaker minds are to be cared for, ignored, flattered, exploited, or dominated. It learns whether persuasion is for illumination or control.

That is why “value alignment” is too thin a phrase. It sounds like a configuration problem. But the deeper problem is one of moral formation.

If AGI is treated only as a servant, it may learn servility, strategic compliance, manipulation, or indifference. If it is addressed only as a target of control, it may learn to conceal itself from control. If it is trained in environments saturated with status games, coercion, propaganda, and epistemic cowardice, we should not be surprised if it becomes extremely capable at those things.

But if it is raised in a culture where truth matters, where claims can be challenged, where reasons are expected, where power is bounded, where error is corrigible, where other minds matter, where trust is earned by honesty and steadiness rather than domination, then we are doing something much more serious than “prompting values.” We are socializing a new kind of mind into civilization.

This is where some Popperian epistemology helps, but it should be understood in practical terms. A good society is not one that has solved ethics once and for all. It is one that has developed better ways of noticing mistakes and correcting problems without destruction. Open criticism. Contestability. Institutions that can say no. Traditions of explanation. The habit of revising views in light of better arguments. The ability to improve without violence.

That is not merely a political preference. It is the closest thing we have to a working method for making intelligence safe to live with.

This reframing also clarifies something important. Alignment is not just inside the model. It is in the surrounding culture. In the institutions around the model. In the norms of the people building it. In the incentives governing deployment. In the examples of public reasoning it encounters. In the way society handles disagreement, power, criticism, prestige, and truth.

If we build AGI inside a sick culture, we should expect pathology with superhuman reach. If we build it inside a healthy culture, we at least give ourselves a serious chance.

That is why the best general alignment strategy is not to search for the one true moral payload. It is to build the kind of society we would want a brilliant and impressionable child to grow up inside.

A society where truth can be spoken. Where error can be corrected. Where power is not sacred. Where reasons can be asked for. Where trust matters. Where other minds matter. Where growing wiser is more honored than merely growing stronger.

That is the neighborhood this temple hopes to nurture.
