This is a guest post by John Andersen, VP Solutions Consulting and Architecture, CognitiveScale.
At the end of the day, what do most AI developers look for? While the answer may vary depending on specific goals and industry focus, most of us just want tools that work and are easy to use. Meet CognitiveScale’s Cortex Fabric -- a low-code platform for developing, deploying, and managing trusted, enterprise AI applications. Fabric is powerful because it simplifies the whole process. And, oh yeah, it works on Red Hat OpenShift.
In this blog (the first in a series), I will provide an overview of Cortex Fabric and its components, so you can see how easy it is to build an AI blueprint, or template, that can be configured to drive a high-value industry-specific business process.
The Power is in the Profile-of-One
The power behind Cortex Fabric is what we call the Profile-of-One (Po1). This unique, patented technology drives hyper-personalization and contextualization for your end users. It makes it simple to build high-fidelity data around any entity: not just a person, but anything temporal by nature, such as claims, products, or even locations. That data fuels the cognitive reasoning needed to generate highly personalized interventions.
How does it do this? To help establish a base understanding of an entity, Cortex Fabric supports more than 126 connectors, including direct integration with Red Hat’s Intelligent Data as a Service (IDaaS) and IBM’s Cloud Pak for Data, to bring in the data elements needed to drive these types of interactions. Data sources can span Customer Data Platforms (CDP), Master Data Management (MDM), existing Customer Relationship Management (CRM) systems, or Electronic Medical Records (EMR), and Fabric can also bring in insights from any machine learning model or analytic. This allows applications to provide the right type of intervention, or personalized outreach, to the right entity, at the right time, over the right channel.
Over the past several years, CognitiveScale has gathered requirements from a variety of organizations across the digital commerce, financial services, and healthcare verticals on what it takes to truly provide a hyper-personalized experience. Through Profile-of-One’s event-driven architecture and scalable data persistence layer, we satisfy fairly strict non-functional requirements for response time and transactions per second, along with the ability to provide real-time insight generation, build profiles around anonymous individuals, and deliver an omni-channel experience.
How does it work? Eight capabilities make up the Profile-of-One:
- Event Based: Backed by an event-sourcing model
- Schema: Records facts, interactions, and insights about an entity
- Versioning: Provides a temporal view of an entity
- Feature Store: Supports training and inference for new machine-learning models
- Auditability: Tracks changes for auditability and compliance purposes
- Visualization: Can be visualized in Cortex or other BI tools
- Feedback and Learning: Facilitates feedback as well as incremental learning
- Metrics and KPIs: Helps track business metrics and KPIs
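To make the first three capabilities above concrete, here is a minimal sketch of an event-sourced, versioned profile in TypeScript. Everything in it is hypothetical and for illustration only; these are not Cortex Fabric types or APIs.

```typescript
// Illustrative only: hypothetical types, not the Cortex Fabric API.
// In an event-sourced model, the current view of an entity is the fold of
// every fact, interaction, and insight ever recorded about it.
interface ProfileEvent {
  entityId: string;
  attribute: string;   // e.g. "creditScore", "lastPurchase"
  value: unknown;
  timestamp: number;   // enables the temporal (versioned) view
  source: string;      // e.g. "CRM", "EMR", or a model's inference
}

interface Profile {
  entityId: string;
  attributes: Record<string, unknown>;
  version: number;     // one increment per applied event
}

// Rebuild a profile as of any point in time by replaying its events;
// replay is also what makes every change auditable.
function profileAt(entityId: string, events: ProfileEvent[], asOf: number): Profile {
  const relevant = events
    .filter(e => e.entityId === entityId && e.timestamp <= asOf)
    .sort((a, b) => a.timestamp - b.timestamp);
  const attributes: Record<string, unknown> = {};
  for (const e of relevant) {
    attributes[e.attribute] = e.value;
  }
  return { entityId, attributes, version: relevant.length };
}
```

Because nothing is overwritten in place, asking "what did this profile look like 60 days ago?" is just a replay with an earlier `asOf` value.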
Along with the capabilities listed above, the Profile-of-One has a series of out-of-the-box prediction engines that make it easy to quickly derive insights without having to write any code. The six prediction engines that derive these inferences are: a) classification, b) recommendation, c) forecasting, d) similarity, e) segmentation, and f) enrichment.
Don’t have data available to start constructing profiles? Don’t worry. Cortex has an SDK that makes it easy to generate synthetic data that mirrors the behavioral characteristics of your real data. This is all done through a simple yet powerful set of capabilities for creating a veracity model of weighted patterns, correlation bias, advanced logic gates, and causal markers.
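As a rough sketch of the idea (not the Cortex SDK itself), a veracity model can be as simple as weighted behavioral patterns plus a correlation rule tying attributes together. All names and numbers below are invented for illustration.

```typescript
// Hypothetical sketch of synthetic-profile generation; the names and
// numbers are invented for illustration and are not the Cortex SDK.
interface Pattern {
  segment: string;    // behavioral archetype
  weight: number;     // how often it appears in the population
  baseSpend: number;  // attribute correlated with the segment
}

// Weighted random choice: `roll` is a uniform draw in [0, 1).
function pickPattern(patterns: Pattern[], roll: number): Pattern {
  const total = patterns.reduce((sum, p) => sum + p.weight, 0);
  let r = roll * total;
  for (const p of patterns) {
    if (r < p.weight) return p;
    r -= p.weight;
  }
  return patterns[patterns.length - 1];
}

// Correlation bias: spend is tied to the chosen segment, with jitter.
function syntheticProfile(patterns: Pattern[], roll: number) {
  const p = pickPattern(patterns, roll);
  return { segment: p.segment, monthlySpend: p.baseSpend * (0.9 + roll * 0.2) };
}
```

Generating thousands of such profiles gives you realistic-looking cohorts to exercise an AI Campaign against before real data arrives.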
Goal-Driven AI Campaigns
Fabric provides open and extensible building blocks to convert any data and models into action within two weeks. Once several Profile-of-Ones have been created, AI developers, along with business analysts and domain experts, can construct a new class of goal-driven AI applications called AI Campaigns. An AI Campaign provides a framework for tracking business KPIs by defining cohorts, goals, and interventions (for example, recommendations).
Building an AI Campaign in Fabric: Cohorts, Goals, and Missions
Building an AI Campaign in Fabric is pretty straightforward. The first step is to define your Cohort by targeting profiles with shared characteristics. This segment, or population of entities, is created through a simple TypeScript expression that filters profiles. Cohorts can be identified by anything from the output of a machine learning model (for example, find all people with a propensity to convert greater than 0.8) to analytics across changes in data (for example, target people whose credit score has dropped by 25 points in the last 60 days).
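A cohort filter of this kind might look like the following sketch. The profile field names are hypothetical; a real filter would run against Profile-of-One attributes.

```typescript
// Hypothetical profile fields, for illustration only.
interface ProfileView {
  propensityToConvert: number;   // output of a machine learning model
  creditScore: number;
  creditScore60DaysAgo: number;
}

// Model-driven cohort: propensity to convert greater than 0.8.
const highPropensity = (p: ProfileView) => p.propensityToConvert > 0.8;

// Data-change cohort: credit score dropped by 25+ points in 60 days.
const creditDrop = (p: ProfileView) =>
  p.creditScore60DaysAgo - p.creditScore >= 25;

// A cohort is just the subset of profiles matching a predicate.
const cohort = (profiles: ProfileView[], pred: (p: ProfileView) => boolean) =>
  profiles.filter(pred);
```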
The second step is to establish a Goal. Goals are KPIs or business objectives that are tracked throughout the life of a campaign, and they are configured in the same TypeScript language used to identify a cohort. When creating a goal, you define the baseline, the target (or end state) of the application, how long the KPIs should be tracked, and how often feedback should be measured. Goals are inherently important for the next step, which is the development of a Mission.
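A goal carrying those fields might be sketched like this; the shape and field names are hypothetical, chosen only to mirror the description above.

```typescript
// Hypothetical goal shape mirroring the fields described above.
interface Goal {
  name: string;
  baseline: number;             // KPI value when the campaign starts
  target: number;               // desired end state
  durationDays: number;         // how long the KPI is tracked
  feedbackIntervalDays: number; // how often feedback is measured
}

const retentionGoal: Goal = {
  name: "reduce-churn",
  baseline: 0.12,   // 12% monthly churn today
  target: 0.08,     // aim for 8%
  durationDays: 90,
  feedbackIntervalDays: 7,
};

// Fraction of the way from baseline to target at the current KPI value.
function progress(goal: Goal, current: number): number {
  return (goal.baseline - current) / (goal.baseline - goal.target);
}
```

Measuring `progress` at every feedback interval is what lets the campaign report how far along its KPI is.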
The last component in an AI Campaign is the creation of a Mission: the series of events that take place to help achieve the pre-defined goal. Missions let you configure an intervention, meaning a recommendation or outreach to a specific profile, and it is through the orchestration of interventions that missions deliver an omni-channel experience for their target audience. Beyond configuring all the possible ways of reaching individuals, the most impactful feature of a mission is the ability to simulate the effectiveness of the application before deploying it.
If you reflect on how traditional omni-channel applications are validated, you will most likely think of creating predefined rules for journey orchestration or running a series of A/B tests to measure the effectiveness of a particular channel. The problem with these approaches is that they often stretch the verification period to several months, delaying time to value.
Imagine an alternative, where you could begin evaluating the effectiveness of different types of personalized outreach messages before ever deploying them. This is the power that mission simulation offers: it enables domain experts to fine-tune what an omni-channel experience should look like, determining with confidence which series of events will drive the most value toward the overall goal of the AI Campaign.
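As a toy version of that idea: before deploying anything, compare the expected effect of each channel under assumed response rates. The rates below are invented for illustration; Cortex's mission simulator is far richer than this sketch.

```typescript
// Invented response rates for illustration; not real simulator output.
interface ChannelRates { email: number; sms: number; push: number; }

const assumedRates: ChannelRates = { email: 0.02, sms: 0.05, push: 0.03 };

// Expected conversions from one intervention per profile in a cohort,
// per channel, so channels can be compared before anything ships.
function expectedConversions(cohortSize: number, rates: ChannelRates): ChannelRates {
  return {
    email: cohortSize * rates.email,
    sms: cohortSize * rates.sms,
    push: cohortSize * rates.push,
  };
}
```

Even this crude comparison illustrates the payoff: channel and sequencing decisions can be argued about in minutes, with numbers, instead of after months of live A/B testing.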
Solving the “Last Mile” of AI Development
Agent Composer is a visual workbench that solves the “Last Mile” problem by automating the messages and connections to the external systems identified through the Profile-of-One and AI Campaigns. Agent Composer orchestrates atomic assets called Skills, composing them into more intelligent insights that feed an existing UI.
A Cortex Skill is an abstract representation of a functional component, which can be a machine learning model, a rules engine, or even an RPA bot. The Cortex Skills specification has been open sourced and can be found here. Cortex ships a series of out-of-the-box skills that can be used without any coding. And, being an open development environment, Cortex provides SDKs, CLIs, APIs, and plug-ins to take any existing asset (for example, a machine learning model) and consistently, repeatedly publish it into the Cortex Trusted AI Hub, a marketplace/repository that makes all of these skills discoverable for Agent Composer to use.
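Conceptually, a skill is any component behind a common invoke contract, and an agent is a composition of skills. Here is a minimal sketch with hypothetical types; the real contract lives in the open-sourced Cortex Skills specification.

```typescript
// Hypothetical types; the real contract is defined by the
// open-sourced Cortex Skills specification, not this sketch.
interface Message {
  payload: Record<string, unknown>;
}

// A skill is any functional component behind one contract: an ML model,
// a rules engine, or an RPA wrapper all look the same to the composer.
type Skill = (input: Message) => Message;

// An agent pipes one skill's output into the next skill's input.
function compose(...skills: Skill[]): Skill {
  return (input) => skills.reduce((msg, skill) => skill(msg), input);
}

// Two toy skills: a "model" that scores, and a "rules engine" that decides.
const score: Skill = (m) => ({ payload: { ...m.payload, score: 0.9 } });
const decide: Skill = (m) => ({
  payload: { ...m.payload, action: (m.payload.score as number) > 0.8 ? "offer" : "wait" },
});
```

Because every skill honors the same contract, the composer can discover them in a marketplace and chain them without caring what implementation sits behind each one.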
Finally, promoting the full bill of materials that makes up an AI application can be difficult. Cortex Fabric packages all of the components of an AI application (for example, datasets, configurations, models, APIs) and seamlessly publishes them across logical environments, from development to staging to, eventually, production, significantly reducing the complexity of DevOps cycles and accelerating the operationalization of all AI-driven outcomes.