Artificial intelligence is an expansive technology that is reaching into all areas of business. There has been talk of creating a CAIO (chief artificial intelligence officer) role, as AI grows beyond the purview of the chief information officer.
There needs to be leadership from the business side, or else efforts become chaotic, overlapping, and extremely expensive. AI may have originated in the technology sector, but it’s time for business leaders to step up and take charge, as members of a team.
“AI is a much bigger task for more than just the technical team,” says John Larson, executive VP and leader of AI for Booz Allen. Instead, he urges, AI efforts need to be led by “a dedicated and centralized AI team.”
The team may start with technology-oriented leaders, but needs to be broadened to the business side. In healthcare, for instance, “technology and AI strategies are being led by hospital and health system executives, spearheaded by technical leaders including chief data, security, and information officers,” says Dr. Adrienne Boissy, chief medical officer at the Cleveland Clinic.
“AI has the potential to touch nearly every aspect of healthcare delivery,” she explains. That “means being inclusive from the start is critical. HR, quality, safety, operations, business intelligence, nursing, clinical leadership, and finance all have a stake in the conversation and, more importantly, ongoing governance.”
For its part, Booz Allen “requires all AI teams to be meaningfully diverse and inclusive as they lead,” Larson relates. “To that end, we purposefully design AI systems that support human endeavors rather than replace them and enhance human abilities so that people can focus on the things at which they uniquely excel.”
As AI develops, “the ethical and risk perspective need to be at the heart of the development of applications,” says Mats Thulin, director of core technologies at Axis Communication. “We need to take a more active stance in addressing these challenges with non-technical competencies such as ethicists and philosophers or those with human behavior competencies.”
Boissy would like to see AI teams “include legal and ethicists to inform potential harm, values at risk, and how to mitigate them.” Importantly, she adds that “what’s ethical is not always legal, and what’s legal is not always ethical, so having both parties at the table feels paramount.” In the case of healthcare, she illustrates, governance and design “must include patients and clinicians themselves. This will feel uncomfortable to some, and it’s exactly why they should be included.”
The growth of AI will give rise to roles “focusing on model training data, training and validation,” says Thulin. “There will also be a clear focus on the quality of the training data with aspects such as bias neutral data being in focus. Moreover as AI will become part of business decision-making, there will be a need for roles focusing on legal, security and the related risks of applying the tools. Ethical aspects will become even more important and the role of an ethics board in companies will be crucial.”
Data scientists “have been working hard at innovating AI, and they feel that it has been their responsibility to do it all – applying ethics within models and datasets for a long time,” says Larson. The role of a cross-enterprise team “is to own AI across the organization. This means both stewardship of the technological capabilities, and also research and development, creation of thought leadership, mapping to standards, handling anything AI-related in partnerships, and all things responsible AI.”