Ethical Use to improve experiences and business outcomes


Author: Andreas Seufert

Unleash the power of AI – Findings and Implications: Part 4

This article summarizes some key findings of our research [Set 22] on the AI ecosystem and its implications for companies. (Provided with kind permission of the original source: Seufert, A./Nelson, M./Setlur, V./Turner-Williams, W./Wright, K./Myrick, N.: "Unleash the power of AI – Findings and Implications." In: Seufert (ed.): df&c – Magazin für #Digital #Finance #Controlling, Schwerpunkt Digitale Transformation, Heft 1-2022, Steinbeis Edition, Stuttgart 2022.)

Mark Nelson is President and CEO at Tableau. He sets the vision and direction for Tableau, and oversees company strategy, business activities, and operations. Prior to becoming President and CEO, Mark was the Executive Vice President of Product Development for Tableau, helping broaden and deepen the company’s industry-leading analytics platform to support customers globally.

Vidya Setlur is the Tableau Research Director, leading a team of research scientists in areas including data visualization, multimodal interaction, statistics, applied ML, and NLP. She earned her doctorate in Computer Graphics in 2005 at Northwestern University. Vidya previously worked as a principal research scientist at the Nokia Research Center. Her research combines concepts from information retrieval, human perception, and cognitive science to help users effectively interact with systems in their environment.

Wendy Turner-Williams manages Tableau’s Enterprise Data Strategy, Data Platforms and Services, Data Governance and Management Maturity, Data Risk, and Data Literacy. She and her team are fuelling data-driven business innovation, transformation, and operational excellence at Tableau. Wendy has 20+ years of management experience across sectors, most recently leading the Information Management & Strategy Enterprise program at Salesforce.

Kate Wright is an analytics leader with 17+ years of development, product management, and leadership experience. She’s responsible for Analytics Engineering, Product Management, and overall User Experience for Tableau and Tableau CRM.

Neal Myrick is VP of Social Impact for Tableau and the Global Head of the Tableau Foundation. He leads the company’s philanthropic investments to advance the use of data for a more just and equitable world. Neal is an active angel investor and sits on several global health and development advisory boards.

Andreas Seufert is a professor at the University of Ludwigshafen and director of the Business Innovation Labs. Andreas leads the expert group Controlling & Analytics of the International Association of Controllers.



During the last two years many organizations had to adjust strategies and adapt to a new world. Changes to the way we live, connect, communicate, and work have forced every person and organization to become more digital and data-driven than ever before.

When many organizations transitioned operations online, it came with a huge influx of information because every digital interaction generates valuable data that can provide insights and support faster decision-making in this digital-first world.

To get deeper insights, we conducted research and spoke with experts, customers, and other thought leaders to learn what emerging forces continue to evolve how we work, the role data and analytics play, and what this means to the future of companies.

In the following, we briefly discuss some of our key findings:

  • AI solutions will see greater success by reducing friction and helping to solve defined business problems.
  • Competitive organizations expand their definition of data literacy, invest in their people, and double-down on Data Culture.
  • Growing recognition of data’s strategic value drives flexible, federated data governance techniques that empower everyone across the organization.
  • Responsible organizations will proactively create ethical use policies, review panels, and more to improve experiences and business outcomes.

Findings Part 4: Responsible organizations will proactively create ethical use policies, review panels, and more to improve experiences and business outcomes

Due to the rapid acceleration of artificial intelligence (AI) adoption and confluence of global issues, there is no longer a one-size-fits-all approach to ethical data and AI use.

Organizations have an opportunity to proactively define how they develop and use data and AI responsibly in this rapidly evolving digital world.

Building fair and accurate AI solutions is a civic responsibility of every business, one now reflected in the focus of global lawmakers [Hol 21]. Now, more than ever, trust and transparency must serve as the foundation for innovation, growth, and customer relationships.

Recent data crises gave us a glimpse into technology’s potential for harming people—including biased facial recognition and discriminatory lending.

These crises have raised public expectations that companies develop and use data securely and responsibly.

A survey by Cisco found that “72% of respondents believe organizations have a responsibility to only use AI responsibly and ethically” [Cis 20].

To lead with ethics and integrity, we’ll see greater corporate and government commitment and accountability for transparent, responsible data and AI use.

By 2025, regulations will necessitate a focus on AI ethics, transparency, and privacy, which will stimulate, rather than stifle, trust, growth, and better functioning of AI around the world [Rev 21].

Responsible organizations will step up and proactively design innovative ways to verify and validate responsible use: formal ethical use policies, audits by third-party experts, internal review panels, and more.

These ethical innovations will improve experiences—and drive stronger outcomes for managing risk and delivering value [PWC 21]. As organizations navigate their ethical use responsibilities, we expect to see more transparent AI and ML solutions and experiences that elevate human judgment and expertise.

These solutions will also tie directly to business goals and workflows, and mitigate related risks, including bias, through explainability.

Organizations will start addressing biased algorithms and data sets that can harm real people and create errors with negative downstream risks, such as “ethical debt” that accrues the way technical debt does [Bax 20]. To ensure innovation advances without causing harm, public and private organizations will collaborate to reform ethics policies.
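To make the idea of auditing a data set or model for bias concrete, the following is a minimal, hypothetical sketch: it computes the demographic parity difference, one simple fairness metric, over a set of decisions. The groups, decisions, and interpretation below are invented for illustration and are not from the article or any specific toolkit.

```python
# Hypothetical sketch: demographic parity difference, one simple fairness
# metric. A large gap in approval rates between groups is a signal to
# investigate the model or data, not proof of discrimination by itself.

def demographic_parity_difference(decisions):
    """decisions: iterable of (group, approved) pairs.
    Returns the absolute gap in approval rates between groups."""
    counts = {}
    for group, approved in decisions:
        total, yes = counts.get(group, (0, 0))
        counts[group] = (total + 1, yes + int(approved))
    rates = [yes / total for total, yes in counts.values()]
    return max(rates) - min(rates)

# Invented toy data: (group, loan approved?)
toy_decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

gap = demographic_parity_difference(toy_decisions)
print(f"approval-rate gap: {gap:.2f}")  # flag for human review above a chosen threshold
```

In practice an internal review panel would apply several such metrics (and domain judgment) rather than a single number, but even this small check illustrates how bias auditing can be made routine and measurable.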

Technology partners will advise governments under pressure to use data for decision-making. In turn, tech companies will take a stand to ensure their technology is used responsibly by everyone, including government institutions. For example, Salesforce prohibits facial recognition in its products as part of its commitment to equality [Gol 20]. In every use case, whether automating a task with AI or collaborating with AI to make better decisions, we must understand what machines are doing in order to avoid mistakes, make ethical decisions, and understand the data. This will remain critical for organizations.

But understanding data, and using it responsibly, requires basic data literacy, or data skills.

And we’re now reaching a point where the lack of data literacy creates unnecessary risks.

While much needs to be done to make ethical data and technology a part of our daily lives and decisions, the investments are worth it: the result will be a more ethical, equitable future for everyone, everywhere.

Without ethical and responsible use, data strategies and AI solutions might work technically, but may not deliver the expected outcome [Mil 20].


Design data and risk management policies with ethical data and AI guidelines.

Existing and draft regulations and data strategies in the US, UK, EU, and beyond protect people against biased and illegitimate use of their private data. To lead with ethics, set ethical codes of conduct, proactively track legislation, stay compliant, and mitigate risk.

Create internal ethics committees or hire third-party specialists to help review and audit.

AI ethics panels will help organizations comply with evolving regulations and create and vet innovative solutions to further address bias and accuracy in their data.

Build intentionally transparent technology or explainable AI, inserting human touchpoints and reviews throughout the process.

Align data and technology with human values and ethics to build transparency or explainability and ensure trustworthy experiences. Proactively consider ethics during development cycles to avoid an endless loop of technological catch-up.
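One way to build the transparency described above into a system is to have it report not just a score but the contribution each input made to that score. The sketch below is a hypothetical illustration using a simple linear scoring model; the feature names and weights are invented for the example and do not come from the article.

```python
# Hypothetical sketch: explainable scoring. For a linear model, each
# feature's contribution is simply weight * value, so a human reviewer
# can see exactly why a score came out the way it did.

WEIGHTS = {"income": 0.5, "debt_ratio": -0.8, "tenure_years": 0.3}  # invented

def explain_score(features):
    """Return the total score and each feature's signed contribution."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    return sum(contributions.values()), contributions

score, why = explain_score({"income": 1.2, "debt_ratio": 0.9, "tenure_years": 2.0})
for name, contrib in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:>12}: {contrib:+.2f}")
print(f"{'score':>12}: {score:+.2f}")
```

Real systems often use more complex models where contributions must be approximated rather than read off directly, but the design principle is the same: every automated decision ships with a human-readable account of itself, giving the review touchpoints something concrete to inspect.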

Build a healthy Data Culture that includes data skills training.

Improving data literacy helps manage poor data quality and the risks associated with collecting the wrong data and asking the wrong questions, which hinder successful AI development and the ability to scale. A data-literate workforce is critical to building a Data Culture that enables and sustains ethical data and AI use [Gop 21]. Data literacy isn’t only about understanding charts; it is about being able to navigate the entire ecosystem that creates and leverages data [Cog 22].


[Bax 20] Baxter: Ethical AI Can’t Wait: 4 Ways To Drive Greater Equality in Your AI, September 03, 2020 (access 18.03.2022)

[Cis 20] Cisco: Building Consumer Confidence Through Transparency and Control, …, June 2020 (access 16.03.2022)

[Cog 22] Cogley: Why Data Ethics Isn’t Going Away, …, January 5, 2022 (access 18.03.2022)

[Gol 20] Goldman: Why We’ve Never Offered Facial Recognition, …, June 15, 2020 (access 18.03.2022)

[Gop 21] Gopa: How Data Culture Fuels Business Value in Data-Driven Organizations, May 2021

[Hol 21] Holland: Efforts to craft AI regulations will continue in 2022, …, 29 December 2021 (access 16.03.2022)

[Mil 20] Millman: How to build a data strategy to scale AI, …, May 15, 2020 (access 18.03.2022)

[PWC 21] PWC: How Organizations Can Mitigate the Risks of AI, December 20, 2021 (access 18.03.2022)

[Rev 21] Revang: Predicts 2022: Artificial Intelligence and Its Impact on Consumers and Workers, 29 November 2021 (access 18.03.2022)