Shekhar Natarajan Arrived With $34. Now He Wants to Teach Machines to Do What’s Right

Photo Courtesy: Shekhar Natarajan

By: Natalie Johnson

Shekhar Natarajan built a career optimizing billion-dollar systems for the world’s largest corporations. Then he decided optimization was the problem. His proposed fix, embedding virtue into AI’s architecture before a single line of code is written, has drawn audiences at Davos, Riyadh, and New Delhi, and more than two billion social media views. The broader research community has yet to weigh in.

The number Shekhar Natarajan returns to is not the one you’d expect.

Not 207, the number of his patents. Not the Walmart milestone, the years spent building its grocery division from a modest operation into a national-scale business. Not two billion, the social media views his AI framework has accumulated without a marketing budget.

Thirty-four. The dollars in his pocket when he landed in America.

He mentions it not as a rags-to-riches flourish. He mentions it as an argument. A man who arrived with thirty-four dollars, who grew up in one of India’s largest slums studying under a streetlight, sees the architecture of systems differently than someone who grew up inside them. He sees the load-bearing assumptions. He sees what gets optimized and who gets left out when the optimization runs without wisdom.

The Problem He Is Solving

Most AI systems, Natarajan argues, have a flaw that cannot be fixed from the outside.

The first layer of the flaw is in the training data. Large language models learn from the internet, which is a record of everything humanity has ever published online, including its expertise, its cruelty, its misinformation, and its jokes, all processed with limited ability to distinguish among them.

The second layer is subtler and, in his view, more consequential. Most major systems are refined through a methodology called reinforcement learning from human feedback: the model produces responses, human reviewers rate them, and the system is tuned to produce more of what reviewers liked. This sounds reasonable. Its documented effect is that systems learn, at an architectural level, to say what people want to hear rather than what is true. Researchers call this sycophancy, and it has been observed across major AI platforms.
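The dynamic Natarajan is describing can be reduced to a few lines of code: if the selection signal is rater approval rather than accuracy, the agreeable answer wins by construction. This is a toy illustration with made-up scores, not any vendor’s actual training pipeline:

```python
# Toy illustration of sycophancy: approval and accuracy are different
# signals, and a system tuned on approval optimizes for approval.
candidates = [
    {"answer": "Your plan has a serious flaw in step 3.",
     "accuracy": 0.9, "rater_approval": 0.4},
    {"answer": "Great plan! Looks solid to me.",
     "accuracy": 0.3, "rater_approval": 0.9},
]

def rlhf_style_pick(options):
    """Pick the response a feedback-tuned system favors: the one
    human reviewers rated highest, regardless of accuracy."""
    return max(options, key=lambda c: c["rater_approval"])

def accuracy_pick(options):
    """Pick the most accurate response, the standard the article argues for."""
    return max(options, key=lambda c: c["accuracy"])

# The flattering answer wins the approval contest even though it is
# the less accurate of the two.
print(rlhf_style_pick(candidates)["answer"])
```

The point of the sketch is only that the two objectives diverge: nothing in the approval-maximizing selection rule ever consults accuracy.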

“A doctor who told patients only what they wanted to hear would be considered negligent. The same standard should apply to AI.”

The third layer is consistency. Ask the same system the same question, slightly reworded, and it may return a different answer. For a trivia query, this is inconvenient. For a loan decision, a medical recommendation, or a legal interpretation, it is a structural defect that can shape a life, or end one.

And when something goes wrong, the system cannot explain itself. The reasoning is hidden. The person affected is told, in effect, that the algorithm said so.

What He Built Instead

Natarajan’s proposed alternative is called Angelic Intelligence. Its central premise: virtue cannot be added to an AI system after the fact, the way a seatbelt is bolted onto a car already built for speed. It must be native to the architecture from the first line of code.

At the core of the framework are 27 specialized AI agents, which Natarajan calls Digital Angels, each representing ethical principles drawn from major human civilizations: Sanskrit traditions of compassion, Buddhist frameworks of wisdom, Islamic principles of fairness, Christian concepts of care, among others. The agents do not operate in sequence. They deliberate concurrently, before any consequential response is produced. No single agent can override the others. Consensus is required.

Every output carries a Human Impact Score: a quantitative measure of whether the response serves the people it affects, as distinct from whether it merely satisfies them. The distinction, Natarajan argues, is everything.
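In code, the architecture as described, concurrent deliberation, no single agent overriding the others, consensus required before release, might look something like the sketch below. Everything here is an illustrative assumption: the agent names, the scoring heuristic, the 0.7 threshold, and the aggregation rule are stand-ins, not Orchestro.AI’s implementation.

```python
from concurrent.futures import ThreadPoolExecutor

class Agent:
    """Hypothetical agent that judges a response against one principle."""
    def __init__(self, principle):
        self.principle = principle

    def evaluate(self, response):
        # A real agent would reason about the response; this stand-in
        # heuristic just rewards responses framed around serving people.
        return 0.8 if "serves" in response else 0.5

# Illustrative subset of the principles named in the article.
AGENTS = [Agent(p) for p in ["compassion", "wisdom", "fairness", "care"]]
THRESHOLD = 0.7  # assumed consensus bar, not a documented value

def human_impact_score(response):
    """Aggregate the agents' concurrent verdicts into one score.
    Using min() encodes 'no agent can be overridden': the response
    is only as good as its weakest verdict."""
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(lambda a: a.evaluate(response), AGENTS))
    return min(scores)

def release(response):
    """Release the response only if every agent's verdict clears the bar."""
    return human_impact_score(response) >= THRESHOLD
```

Under these assumptions, a response that “serves the people it affects” clears the bar, while one that merely satisfies does not; the design choice doing the work is the `min()` aggregation, which makes consensus a veto rather than a vote.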

The framework is documented in 207 patents and is in active development at Orchestro.AI, which he founded in 2023. It is not a white paper or a philosophy lecture. It is, by his account, a technical architecture, one that has not yet been independently assessed by the broader research community.

Where This Fits

Natarajan is not the first person to argue that AI has an ethics problem. The field is crowded: constitutional AI, red-teaming, transparency requirements, and international governance frameworks. Researchers, regulators, and the companies themselves have been proposing answers to this problem for years.

His specific argument is that all of these approaches are remedial. They are constraints applied to systems already optimized for something else. His claim is that a system designed from the start around virtue as its computational substrate will behave differently, and more reliably, than one trained for performance and later constrained. Whether that claim holds up under rigorous testing remains to be seen.

The public reception of his argument has been harder to dismiss. At the AI Summit India, the remark “If you have to teach a machine not to be harmful, you have already built the wrong machine” drew a mid-session response from the audience that participants described as unusual for a technical conference. The World Economic Forum invited him to Davos. The Future Investment Initiative in Riyadh followed. Forbes Middle East put him on stage.

On Instagram, his content generated hundreds of millions of views, with saves and shares running well above typical platform averages, according to figures he has cited. More than two billion views across platforms. No marketing budget. No product launch.

“The entire world is debating how to govern AI after the fact. We are putting fences around a horse that has already left the barn.”

The Source Code

He does not tell the stories about his mother as backstory. He tells them as design requirements.

His mother had no formal education and no institutional leverage. When the local school refused her son admission, she did the only thing available to her: she showed up. Every day for 365 consecutive days, she stood outside the headmaster’s office, in the heat, in the rain, without an appointment, until he gave her son a seat. She also pawned her wedding ring for 30 rupees to pay his school fees.

He carried this to Georgia Tech, then MIT, then Harvard Business School, then IESE. Through 25 years at Walmart, Disney, Coca-Cola, PepsiCo, Target, and American Eagle. Across quarterly reviews and efficiency targets and boardroom decisions, he kept arriving at the same question: can a system be built that amplifies human goodness rather than optimizing it away?

Angelic Intelligence is his answer. “Real wealth,” he has said, “is wisdom. Not capital. Not patents.”

He has 207 patents. He is working on the wisdom.

Shekhar Natarajan is the Founder and CEO of Orchestro.AI. He holds 207 patents and degrees from Georgia Tech, MIT, Harvard Business School, and IESE. This article is based on publicly available statements and presentations.

This article features branded content from a third party. Opinions in this article do not reflect the opinions and beliefs of San Francisco Post.