The Visibility Gap: Why Credentials Often Say Too Little
Visibility isn’t binary.
Learning isn’t invisible just because it isn’t recognised. In some cases, it’s invisible because the evidence is thin, generic, or disconnected from outcomes.
A lot of badges are being issued: Credly alone has issued more than 100 million digital credentials. As I have been purposefully searching out and examining these credentials, my focus has been less on the fact that learning has been acknowledged and more on what has actually been made visible. What does a credential really tell someone about what a person knows, can do, or has experienced? And just as importantly, what does it leave unsaid?
This week’s piece explores that gap, not the absence of credentials, but the absence of meaningful evidence, and the missed opportunities that creates.
When recognition isn’t evidence
Many credentials do their job, at least on the surface. They confirm that someone attended a course or completed a learning experience, whether formal or informal. They signal participation and provide acknowledgement. But when you look a little closer, the evidence often tells us very little:
How long was the learning experience?
Was it facilitated, self-paced, online, in-person?
Was there any form of assessment, practice, or feedback?
What was expected of the learner beyond attendance?
In these cases, learning hasn’t gone unrecognised, but the impact and outcomes remain invisible. The credential exists, but the signal is weak. It’s like food labels that claim to “support brain function” or “promote vitality”: claims that sound good but are rarely defined, measured, or proven.
Credentials as artefacts of completion
There’s another, subtler form of invisibility: when credentials become a hygiene step. You complete the learning, and a badge or certificate is issued at the end, because that’s just “what we do”. When this happens, the credential isn’t designed as evidence. It’s an artefact of completion.
The problem with hygiene credentials isn’t that they exist, but that they’re rarely comprehensive. Little thought is given to what they actually communicate, who they’re for, or how they might be used beyond signalling participation. As a result, they often flatten rich, complex learning into a single, generic marker that travels poorly outside the context in which it was issued.
This is where I think we need to flip the frame. Instead of starting with the course and adding a credential at the end, we need to start with the evidence we want to make visible, and work backwards. What do we want someone else to be able to understand about this learning experience? What capabilities, behaviours, or contexts matter? When evidence becomes the design anchor, credentials stop being a formality and start becoming a bridge between learning and its real-world value.
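Working backwards from evidence can be made concrete in the credential payload itself. Here is a minimal sketch in Python; the field names loosely echo Open Badges conventions, but the exact structure, learner, and course are my own illustration, not a spec-compliant document:

```python
# Sketch: an evidence-first credential payload.
# Field names loosely echo Open Badges conventions; the exact
# structure here is illustrative, not a spec-compliant document.

def build_credential(learner, achievement, evidence_items):
    """Start from the evidence we want visible, then wrap it in a credential."""
    return {
        "recipient": learner,
        "achievement": achievement,
        "evidence": evidence_items,  # the design anchor, not an afterthought
    }

credential = build_credential(
    learner="jane@example.org",                      # hypothetical learner
    achievement={
        "name": "Worksite Safety Fundamentals",      # hypothetical course
        "criteria": "Completed induction, supervised practice, and assessment",
    },
    evidence_items=[
        {"type": "assessment", "narrative": "Scored 92% on scenario test"},
        {"type": "observation", "narrative": "Supervisor sign-off on site"},
    ],
)

# The credential now answers the questions a bare badge leaves open:
# duration, assessment, and expectations can all live in the evidence.
print(len(credential["evidence"]))
```

Designed this way, the evidence list is decided first and the badge is simply its container; issuing a credential with an empty evidence list would be a design smell, not a default.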
The invisible learning we still ignore
There’s another category of learning we rarely talk about in visibility conversations: mandatory and compliance learning. So much corporate learning sits in HR systems. Anti-money laundering. Fraud awareness. Cyber security. Health and safety. It’s required and people need to complete it, but then it disappears.
Imagine if that learning became portable, verifiable evidence. Imagine arriving in a new organisation with proof that you’ve already met current regulatory standards, rather than sitting through the same training again.
The benefits aren’t just individual. They’re systemic:
Reduced retraining
Better engagement
Less “tick and flick” learning
Stronger signals of real capability
Again, learning is happening. The visibility just isn’t designed.
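The portability idea above can be sketched in code. Assuming a compliance credential records which standard it covers, when it was issued, and how long it remains valid (all three fields are my invention for illustration, not part of any real scheme), a receiving organisation could check it instead of retraining:

```python
from datetime import date, timedelta

# Toy check: is a prior compliance credential still valid?
# The fields ("standard", "issued_on", "valid_for_days") are
# illustrative assumptions, not part of any real scheme.

def still_valid(credential, required_standard, today=None):
    """True if the credential covers the required standard and hasn't expired."""
    today = today or date.today()
    if credential["standard"] != required_standard:
        return False
    expiry = credential["issued_on"] + timedelta(days=credential["valid_for_days"])
    return today <= expiry

aml_badge = {
    "standard": "AML-2025",          # hypothetical standard identifier
    "issued_on": date(2025, 3, 1),
    "valid_for_days": 365,
}

# A new employer could accept this instead of re-running the same course.
print(still_valid(aml_badge, "AML-2025", today=date(2025, 9, 1)))  # True: within a year
```

The real work, of course, is agreeing on shared standard identifiers and trusting the issuer; the check itself is trivial once the evidence is portable and verifiable.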
Why visibility increasingly matters
Learning has always been happening everywhere: across work, volunteering, communities, and life experience. What’s changed isn’t that people are learning (although online tools and AI are rapidly changing the way they learn). What is being massively disrupted are the traditional proxies we relied on: degrees, job titles, institutional brands. They are no longer doing enough of the work for us.
In the past, we didn’t look for granular evidence because we didn’t need to, or couldn’t use it. The systems on the other side weren’t designed to interpret nuance.
Now they are. And suddenly, visibility matters in a much more precise way.
Habitat for Humanity
This week’s Who’s Badging looks at a volunteering badge issued by Wellmark, Inc. to recognise their staff who volunteered with Habitat for Humanity.
Let me be clear upfront: this is a good thing, and I support it. Recognising volunteering matters, and acknowledgement matters. In this instance, what’s being evidenced is the act of volunteering and the values associated with it: caring, relationship building, giving back to the community. In effect, a “thank you for being a good human” badge.
Unfortunately, this is a familiar pattern in the volunteering space, and what’s missing is the opportunity to surface transferable skills. Anyone who has volunteered with Habitat for Humanity will have:
Completed site induction and safety briefings
Worked as part of a team in real-world conditions
Followed procedures on an active, potentially hazardous worksite
Operated under supervision, time constraints, and shared responsibility
None of this confers mastery after a couple of days. But it does matter. And when combined with other experiences over time, it becomes powerful, portable evidence. The issue isn’t the badge. It’s the missed opportunity to add value through better evidence design.
Skills translators and visibility at scale
I’m particularly interested in the growing role of skills translators in the ecosystem. They’re not just another taxonomy or framework. They’re an interpretation layer. A way of making sense of evidence that already exists, but hasn’t been usable at scale.
For the first time, we’re seeing infrastructure that can actually do something with richer evidence: connect learning experiences to skills language, make them legible to employers, and meaningful to learners themselves.
Brandon Dorman released the Skills API Translation Service in the last few weeks and is offering it as a free service: https://substack.com/@brandondorman/p-183936351
These types of initiatives end the argument about which skills taxonomy to use, because increasingly they all talk to each other, either through APIs or AI. As Simone Ravaioli says:
Stop looking for the “perfect” standard. Start building for a polyglot future where translation is a utility, not a barrier.
Without interpretation, even good evidence stays invisible.
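To make “translation as a utility” concrete, here is a toy sketch of that interpretation layer. The taxonomy names and mappings are invented for illustration; a real service such as the Skills API Translation Service would resolve these through its own data and APIs, not a hand-built table:

```python
# Toy interpretation layer: translate skill labels between taxonomies.
# The taxonomy names and mappings below are invented for illustration.

TRANSLATIONS = {
    ("taxonomy_a", "taxonomy_b"): {
        "Team Collaboration": "Working with Others",
        "Site Safety Awareness": "Workplace Health & Safety",
    },
}

def translate(skill, source, target):
    """Return the target-taxonomy label, or None if no mapping exists."""
    return TRANSLATIONS.get((source, target), {}).get(skill)

print(translate("Team Collaboration", "taxonomy_a", "taxonomy_b"))
```

With a layer like this in the ecosystem, which vocabulary a credential was issued in stops mattering: evidence expressed in one taxonomy stays legible in another, which is exactly the polyglot future the quote above describes.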
The Badge Sandpit - 12 Resolutions for the New Year
We’re now halfway through the Badge Sandpit New Year resolutions. Have you tried any of these?
Open a wallet and claim a badge
Test what you know and earn a badge
Claim and issue peer recognition badges
Use Pythea, TalentPass’s AI agent, to build a rich skills profile
Use the CLOCK Skills Discovery AI coach to articulate and evidence your skills
Claim a skill and build your evidence
Backfill the credentials you already earned with Pearsana
Jump into the Badge Sandpit and have a play
Chance to catch up
In February, I’ll be running a workshop on micro-credential visibility across the learner journey at the Digital Credential Summit in Philadelphia.
We’ll explore where visibility is gained, where it’s lost, and how to design evidence intentionally across learning, work, and life. I’m really looking forward to those conversations, and I hope to see some of you there.
Because the question isn’t whether learning is happening - it is. It’s whether we’re building systems that allow it to be seen, understood, and valued.
Wendy







