Matt Sanchez, founder and CTO of CognitiveScale, courtesy photo.

Five years ago, Matt Sanchez founded CognitiveScale, a fast-growing artificial intelligence startup in Austin.

Today, he serves as Chief Technology Officer of CognitiveScale, which has raised $50 million to date and has 150 employees, 90 of them in Austin. It also has offices in New York, London, and Hyderabad, India.

This week, CognitiveScale held its Cognite2018 conference at the Hyatt Regency in downtown Austin. The conference brings together CognitiveScale’s partners and clients to talk about the practical ways they can apply AI to their businesses. Its customers include USAA, Microsoft, Dell, GE, NBC, and MD Anderson Cancer Center.

Manoj Saxena, formerly general manager of IBM Watson, is the company’s chairman and Akshay Sabhikhi is its CEO. Previously, Sabhikhi was the global leader for Smarter Care at IBM.

During a break, Sanchez sat down with Ideas to Invoices to talk about Responsible AI and the company’s core products. He serves as the company’s principal architect and product developer. He previously served as the leader of IBM Watson Labs and was the first to apply IBM Watson to the financial services and healthcare industries. He also previously worked as chief architect and employee number three at Webify, which IBM acquired in 2007.

Sanchez “earned his BS degree in Computer Science from the University of Texas at Austin in 2000, has been granted six US patents, and is the author of more than a dozen more,” according to his online bio.

Artificial Intelligence is a broad term with a history going back to the ’50s and ’60s, Sanchez said. Augmented Intelligence, which is what CognitiveScale focuses on, is when machines help humans and extend their capabilities, he said.

“Augmented is not just predictions and machines telling you what to do, it has to tell you why,” Sanchez said.

For MD Anderson Cancer Center, CognitiveScale created a patient concierge system that helps cancer patients learn the basics of living in Houston and receiving treatment, covering everything from where to eat to how to get to their appointments. It also includes information on how to find dentists, the best diets to follow, and more. It adapts to the user, learns what their needs are, and serves up helpful personalized recommendations, Sanchez said.

“It really highlighted this notion of a profile of one – a system that learns about the individual and actually adapts the information to the individual,” Sanchez said.

CognitiveScale also worked with MD Anderson to develop other applications, including a cognitive help desk and billing and finance applications, Sanchez said. In all, CognitiveScale has built nine AI applications for MD Anderson.

In the beginning, CognitiveScale focused on healthcare, financial services, and digital commerce, but it has since developed a platform that is now being applied to a variety of other industries, Sanchez said.

One of the things CognitiveScale commits to is that AI should be practical, scalable and responsible, Sanchez said.

“The ethical implications of AI have to be considered upfront,” Sanchez said.

CognitiveScale has partnered with the government of Canada to create AI applications that help its citizens, Sanchez said.

When implementing a new AI system, CognitiveScale looks at data ownership, explainability, robustness, compliance and fairness, Sanchez said.

“There are lots of examples of AI gone wild where unintended consequences have occurred,” Sanchez said.

Such incidents are rare and tend to get hyped, but it’s important to examine each case, address it, and understand why it occurred, Sanchez said.

The biggest barrier to adoption of AI is data: data ownership, data access, and data cleanliness and readiness, Sanchez said.

“Data needs to be nutritious. It needs to be digestible. It needs to be something you have to understand the nature of it before you can apply these tools,” Sanchez said.

One of the principles of any information system is garbage in, garbage out, Sanchez said. If a facial recognition system is trained only on fair-skinned people, it will fail because of bias, he said. It’s important to detect biases in the system, bring visibility to them, and correct the issues before deployment, he said.
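To illustrate the kind of check Sanchez describes (this is a generic sketch, not CognitiveScale’s product; the function name, labels, and threshold are all hypothetical), one simple pre-deployment step is to audit a training set for underrepresented groups before fitting a model:

```python
from collections import Counter

def check_label_balance(labels, threshold=0.2):
    """Return the share of each group that falls below `threshold`.

    `labels` is a list of group labels for the training examples
    (e.g. skin-tone categories in a face dataset); `threshold` is the
    minimum share each group should hold. Flagged groups signal the
    data should be rebalanced before training.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()
            if n / total < threshold}

# A toy dataset heavily skewed toward one group:
sample = ["fair"] * 90 + ["medium"] * 7 + ["dark"] * 3
print(check_label_balance(sample))  # flags "medium" and "dark"
```

A check like this only surfaces representation gaps in the labels themselves; real bias audits also measure model error rates per group after training.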

For more on the interview, listen to the full podcast: