
AI Is Learning Our Values — Whether We Like It or Not

We often talk about artificial intelligence as if it’s something separate from us — a tool, a threat, or a future intelligence with its own agenda.


But AI isn’t learning what we say we value.

It’s learning from what we do.


From where funding flows.

From what gets measured.

From what gets rewarded.

From what is cut, deprioritised, or ignored.


And that should give us pause.

Values Are Already Data


AI systems are trained on observable patterns. Not intentions. Not mission statements. Not corporate values posters.


That means our values are already encoded in:

- performance frameworks

- work policies

- productivity metrics

- funding decisions

- algorithmic priorities

- where attention and energy are consistently directed

- where we spend our money

- where we spend our time

If something isn’t measured, funded, or rewarded, it effectively doesn’t exist in the system.


Take performance reviews.


In many organisations, performance is still defined by:

- output over wellbeing

- speed over sustainability

- visibility over care

- individual achievement over collective health


What doesn’t count?

- emotional labour

- relationship building and repair

- prevention of burnout

- long-term resilience

- adapting to changing personal circumstances


Yet these are the very things that keep systems functioning.

When AI is introduced into these environments — to optimise productivity, allocate resources, or assess outcomes — it doesn’t question these definitions. It amplifies them.


The same applies to government funding and policy.

What we fund signals what we value.

What we cut signals what we’re willing to sacrifice.

Care services, education, social infrastructure, and preventative support are often the first to be reduced — even as the long-term costs of neglecting them grow.


AI doesn’t see this as a moral issue.

It sees it as a pattern.

And patterns get reinforced.

Perhaps the most powerful — and least acknowledged — data point is attention.


What we consistently give our time, focus, and cultural energy to becomes:

- more visible

- more profitable

- more “important”


What we look away from becomes statistically insignificant.

AI is extraordinarily good at tracking attention.

It learns what matters by watching where we linger.


Why This Matters Now

AI doesn’t create values.

It scales them.


If we build AI systems inside cultures that reward extraction, speed, and short-term gain, those systems will become more efficient at doing exactly that.


If we want AI to support human flourishing, sustainability, and care, then those values must first be structurally visible.


That means changing:

- what we measure

- what we reward

- what we fund

- and what we protect


Not in theory — but in practice.


The Real Question

The question isn’t whether AI should have better values.


The question is:

What values are we already teaching it — every day — through our choices?

Because AI is watching.

And it’s learning.


If AI is a reflection of us, then what will it be reflecting most?

How will your values be reflected in AI?

