Building digital trust in the workplace means including employees in developing technology strategies. Image: iStock/danchooalex

Why workers must be at the centre of the digital trust agenda — and 4 ways to get there
Daniel Dobrygowski

Head, Governance and Trust, World Economic Forum

Giannis Moschos

Community Lead, Civil Society, World Economic Forum


  • Emerging technologies such as artificial intelligence (AI) are transforming many workplaces at an unprecedented speed.
  • It’s important for governments, businesses, employees and communities to collaborate on building a culture of digital trust as the use of such technologies continues to grow.
  • A recent World Economic Forum panel discussion explored how businesses, workers and labour unions can integrate worker perspectives into digital trust frameworks across the technology value chain.

As artificial intelligence (AI) and other emerging technologies transform the workplace at unprecedented speed, digital trust is no longer a “nice-to-have”; it’s a necessity. Trust underpins innovation that is not only efficient, but also fair, transparent and accountable. All stakeholders must be involved in building that trust, from governments and businesses to individuals and communities. Critically, that includes workers.

Workers occupy a unique and essential place in this conversation. They help design the technologies shaping our world and they use tools developed by others. They are directly affected by how those technologies are deployed, whether on the factory floor, in fulfilment centres, in hospitals or behind a screen. As AI becomes deeply embedded in the future of work, worker participation is more than a matter of compliance; it’s a matter of trust.


As part of its Dialogue Series: Business and Labour and its Digital Trust Community, the World Economic Forum recently hosted a panel discussion that explored actionable ways that businesses, workers and labour unions can integrate worker perspectives into digital trust frameworks across the technology value chain, and how they can all work together to build a culture of digital trust.

Here are four reflections from the conversation:

1. Trust begins with inclusion, so workers must be at the table

No employer – or employee – can fully benefit from AI in the workplace unless trust is built into the process of developing an AI strategy. That trust is built when workers understand how AI tools function, how these technologies will impact their jobs and how they can actively contribute to shaping their development and deployment.

“We do see workers, their trade unions and employers going down this road, but there are some key things that are required on this journey,” said Christy Hoffman, General Secretary of UNI Global Union. “[Including] collective bargaining, advanced notice, joint risk assessment and training.”

An ongoing tech-labour partnership between Microsoft and the AFL-CIO (the largest US labour union federation, representing 15 million workers) offers a potential blueprint for how business and unions can collaborate to shape responsible AI. This partnership advances three core priorities: expanding access to AI training, facilitating direct input from workers and labour leaders to technology developers, and jointly advocating for public policy that supports equitable AI integration.

Crucially, this isn’t just about listening. As Julie Brill, Corporate Vice President, Trusted Technology Group at Microsoft, emphasized during the panel discussion, it’s about building structured mechanisms for continuous input. This ensures workers’ voices remain a constant in the digital trust equation. Governance frameworks like these turn a vision of trusted AI into standard practice.

2. Without continuous dialogue, transparency falls short

Transparency isn’t just a governance principle; it’s an enabler of trust. When organizations are clear about how AI tools are being deployed, what they’re designed to do and which guardrails are in place, they reduce uncertainty, foster understanding and strengthen employee confidence.

This is especially important in areas where AI intersects with performance and productivity. Workers need reassurance that AI won’t be used to monitor, rank or replace them without their knowledge or input. When people understand the purpose and limits of these systems, they’re often far more likely to engage with them in a constructive way.

The panel broadly agreed that transparency can’t be a one-time announcement. It must evolve as the technology does. As AI changes how work is done — whether through task redistribution, workload intensification or automation — open, ongoing dialogue is needed. These conversations should be grounded in shared assessments of risk and impact. They should also include a space for worker concerns, questions and ideas.

Some companies are building infrastructure to support this. At Salesforce, internal AI councils serve as a platform to bring workers into key decisions around AI deployment. These councils help ensure that emerging technologies align with company values and meet the expectations of employees.

“Transparency is also helpful to the company,” Ed Britan, Senior Vice President, Global Privacy, Salesforce, explained during the event. “If companies don’t impose clear boundaries around internal data use, employees may become sceptical — not only about how their own data is handled, but about how they’re expected to explain these technologies to customers.”

Ultimately, transparency isn’t just about disclosure; it’s about collaboration. Creating systems where workers are informed, empowered and included in governance isn’t just a way to manage risk; it’s a way to build the mutual trust that allows innovation to thrive.

3. Using AI agents to augment human potential

AI agents are software systems that use artificial intelligence to pursue goals and carry out tasks on behalf of users. Their introduction marks a significant shift in workplace technology. By automating repetitive or routine functions, these agents have the potential to relieve workers of low-value administrative burdens and allow them to focus on tasks that require human skills like empathy, creativity and judgement.

But the promise of AI agents goes beyond efficiency. When implemented responsibly, they can also enhance privacy by reducing the need for human access to sensitive data, according to Britan. In customer service, for instance, agents can triage requests and retrieve information without exposing personal details. This is a meaningful step toward safer data handling.
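To make this concrete, the sketch below shows one way such a triage agent could work: personal details are redacted before the request is routed, so the routing decision never touches the raw data. This is an illustrative sketch only; the detection patterns, queue names and helper functions are hypothetical assumptions, not drawn from any system discussed in the panel.

```python
import re

# Hypothetical patterns for common personal details. A production system
# would rely on a dedicated PII-detection service, not two regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace personal details with placeholders before anyone sees the text."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

def triage(request: str) -> str:
    """Route a customer request to a queue using only its redacted content."""
    safe_text = redact(request).lower()
    if "refund" in safe_text or "charge" in safe_text:
        return "billing"
    if "password" in safe_text or "login" in safe_text:
        return "account-security"
    return "general"

# Example: the request is routed without exposing the sender's details.
message = "Hi, I'm jane@example.com and I was double-charged on +1 555 010 9999."
print(triage(message))  # -> billing
print(redact(message))  # -> "Hi, I'm [email removed] and I was double-charged on [phone removed]."
```

The design choice here is that redaction happens first, so every downstream step, human or automated, operates on the sanitized text by default.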

This power requires clear boundaries, however. AI agents should augment, not replace, human decision-making. They should take over the predictable, not the nuanced. Without clear communication about their limitations, there’s a risk that expectations will outpace reality, eroding trust.

Organizations must not only clarify what agents are for; they must also empower workers to question when and how they’re used. That includes giving employees the authority to override, escalate or disable AI-driven processes when human judgement is needed.

As Britan pointed out during the panel, it’s important to cultivate a “beginner’s mindset”. Workers need room to experiment, make mistakes and learn together. Paired with governance and feedback loops, this mindset helps build a workplace culture where trust in technology is earned, not assumed.

4. Invest in people – globally

Helping workers succeed in a technology-driven world means more than simply introducing new tools; it requires real investment in people. Skill-building and lifelong learning are foundational to an inclusive digital economy. Initiatives that have already reached millions of workers globally demonstrate what’s possible, but the work is far from finished.

As Hoffman noted during the discussion, digital trust must extend beyond office settings and into the broader tech ecosystem. Behind many AI systems are workers in major outsourcing hubs in Asia, Africa and Latin America. These content moderators, data annotators and platform contractors play a vital role in training and maintaining the technologies we rely on. While essential, these roles are often marked by low pay, limited protections and challenging working conditions – particularly in areas like content moderation.

To strengthen digital trust globally, companies have an opportunity to lead by example, investing in fair compensation, robust wellbeing support and clear contractual standards that recognize the value of this work. Building equity into the digital supply chain is not just a matter of ethics; it’s the foundation of a more resilient and trusted AI ecosystem.

The panel agreed that digital trust is a shared commitment. From design to deployment, and from training to ongoing oversight, workers must be included as partners in shaping this technology.