Predictive Risk Intelligence (PRI)
The concept of Predictive Risk Intelligence (PRI) is to help organizations apply analytics to develop a forward-looking view of their potential risks. The risk management lifecycle can be organized into three categories:
- Reactive risk monitoring is the ability to respond to an event after it occurs with a remediation plan, as well as to prevent similar events in the future.
- Integrated risk monitoring is incorporated throughout business processes and enables timely reporting on risk based on identified criteria and thresholds.
- Predictive risk monitoring applies analytics to current and historical information to identify potential or emerging risks.
Predictive Risk Intelligence Process
PRI can help turn risk, control, and performance information into preventative and actionable insights, giving organizations a refined understanding of emerging risks. The PRI process is explained below:
1. Define PRI scope: Management and risk governance teams identify and prioritize the risk events to track and monitor on a continual basis.
2. Identify precursors of risk events: Each in-scope risk is analyzed to identify indicators or incidents that precede risk events and provide a reliable indication that an event may occur. For example, product quality failures may result from an internal process failure or a supplier failure.
3. Identify data sources: Each risk event precursor is prioritized and mapped to the internal and external data sources that can supply the baseline data required for analysis and predictive modeling (see Figure below).
4. Develop static and self-learning predictive algorithms: Through combined analysis of internal and external precursor information, a predictive analytics algorithm (a data-driven statistical model) is selected for fit and applied to predict or detect a heightened likelihood of a risk event occurring. Data mining and machine learning capabilities allow these models to be maintained and evolved with ongoing improvements in accuracy.
5. Initiate PRI generation: Risk governance functions begin collecting the baseline data for each risk category and apply the predictive algorithms to generate emerging risk alerts and notifications. Results are reported and continuously evaluated against actual outcomes to determine the success rate of the models and to enhance the accuracy of insights. Formal reports describing the emerging risk environment are generated for C-suite and board decision-making.
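The process above can be sketched in code. The following is a minimal, illustrative sketch of steps 2 through 5 for a single risk event: precursor indicators are scored against their historical baselines, and an alert is raised when the predicted risk level crosses a threshold. All precursor names, weights, and thresholds here are hypothetical; in practice the model would be fitted to data and recalibrated against actual outcomes as described in step 5.

```python
# Step 2: hypothetical precursors for a "product quality failure" risk event,
# each mapped (step 3) to an internal or external data source.
PRECURSOR_WEIGHTS = {
    "supplier_defect_rate": 0.5,         # external supplier-quality feed
    "internal_process_deviations": 0.3,  # internal QA system
    "customer_complaint_volume": 0.2,    # CRM / social media feeds
}

ALERT_THRESHOLD = 0.6  # tuned over time against actual outcomes (step 5)

def risk_score(observations, baselines):
    """Weighted ratio of current observations to historical baselines,
    capped so no single precursor can dominate the score (step 4)."""
    score = 0.0
    for name, weight in PRECURSOR_WEIGHTS.items():
        ratio = observations[name] / baselines[name]
        score += weight * min(ratio / 2.0, 1.0)  # saturates at 2x baseline
    return score

def generate_alert(observations, baselines):
    """Step 5: emit an emerging-risk alert when the score crosses the threshold."""
    score = risk_score(observations, baselines)
    if score >= ALERT_THRESHOLD:
        return {"risk": "product_quality_failure", "score": round(score, 2)}
    return None
```

A static weighted score like this is only the simplest case; the "self-learning" variant in step 4 would replace the fixed weights with a model retrained as labeled outcome data accumulates.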
- Security and Privacy Concerns: Predictive analytics, or predictive intelligence, typically involves linking data from multiple sources, including social data, to identify trends and provide insights for decision-making. With many new sources of data becoming available, such as data from social media applications, aggregating these sources for predictive analytics raises privacy concerns and requires new ways to preserve privacy (Bates et al., 2014; Petersen, 2018). Several of the concerns, such as automated spear phishing and personalized propaganda, rely on the owners of SIS gaining unauthorized access to personal information about individuals. The risks that SIS pose to security and privacy are exacerbated by poor threat-detection methods that misclassify malicious threats as benign, fail to detect key provocations, or involve authentication mechanisms capable of misidentification and misinformation due to data misuse (Gupta, 2018; Horvitz, 2017). One example that highlighted the privacy concerns with SIS was the Cambridge Analytica-Facebook scandal. Cambridge Analytica, in conjunction with Facebook, was at the center of harvesting and using personal data to create advantages for candidates, influencing the outcome of the 2016 US presidential election and the 2016 UK Brexit referendum. Cambridge Analytica used Big Data and advanced ML techniques to provide a full suite of services for highly targeted marketing and political campaigning, which raised concerns regarding the privacy of those whose data had been accessed (Gupta, 2018; Isaak & Hanna, 2018).
- Integrity: Another ethical issue with predictive risk intelligence relates to a lack of integrity when designing or using algorithms. For many companies, revealing certain information would have a knock-on effect on their business, so they may compromise the integrity of their processes in order to protect it (Hacker, 2018). There is a potential imbalance between a business's interests and its moral obligations to other stakeholders. For instance, a company may offer a prediction that could improve particular social conditions around the world, yet remain unclear about its social obligations and to whom it is accountable. Where a company's client uses morally unacceptable practices such as discriminatory profiling, the company offering the predictive intelligence faces a much higher risk if this information is revealed, and so algorithms could be intentionally designed to ignore the practice (TU Wien, 2018).
- Transparency and Fairness of Automated Decision-making: A further concern with the use of AI in predictive risk intelligence is the transparency and fairness of the algorithms involved (Wachter, Mittelstadt, & Floridi, 2017b). According to Wachter et al. (2017b), this concern arises because SIS use complex and opaque algorithmic mechanisms that can have many unintended and unexpected effects. In automated decision-making processes such as predictive risk intelligence, users or clients get only a limited idea of why a decision has been made in a certain way, which does not mean the decision is justified or legitimate (Wachter, Mittelstadt, & Floridi, 2017a). Some scholars, such as Hacker (2018), Horvitz and Mulligan (2015), and Meira (2017), affirm that when SIS are used in making decisions, for instance around risk, there may be a lack of transparency around what data is being used to train the decision-making algorithms in the first place. A real-life example of such issues is the case of the Wonga payday lender in the United Kingdom, which opaquely used more than 7,000 data points to assess how likely applicants were to default on a loan (Katwala, 2018).
- Algorithmic Bias: There are also concerns about the reliability of using AI in making predictions. For example, AI can learn bias and prejudicial values when these are present within the dataset, leading to unfair or inaccurate predictions (Barocas & Selbst, 2016; Crawford & Calo, 2016). A lack of reliability in the predictions made by SIS can be introduced when certain data are either included in or excluded from the training dataset (Williams et al., 2018). Due to potential bias in developing algorithms, AI can learn pre-existing inequalities present in the training dataset, resulting in a bias against historically disadvantaged populations (Barocas & Selbst, 2016). Additionally, data can be manipulated and misinterpreted according to the predispositions of those handling the data for predictive intelligence (Katwala, 2018; Terzi, Terzi, & Sagiroglu, 2015). An example of such bias is given by Hacker (2018) regarding the use of SIS in predictive medical intelligence, where the algorithm may reflect existing biases, with certain medical treatments being chosen on the basis of the practicing physician's specialty. Such issues highlight the importance of integrating data quality protocols and high ethical standards to mitigate bias and discrimination when using SIS for predictive intelligence (Hacker, 2018).
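One of the simplest bias checks implied above is to compare a model's positive-prediction rates across groups, sometimes called the demographic parity difference. The following is a minimal sketch with hypothetical predictions and group labels; a real audit would use many more records and additional fairness metrics.

```python
def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rate between two groups.

    predictions: list of 0/1 model outputs
    groups: parallel list of group labels ("A" or "B")
    """
    rates = {}
    for g in ("A", "B"):
        preds_g = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds_g) / len(preds_g)  # positive rate per group
    return abs(rates["A"] - rates["B"])

# A model trained on skewed historical data may favour one group:
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)  # 0.8 for A vs 0.2 for B
```

A large gap does not by itself prove discrimination, but it flags exactly the kind of learned inequality that the data quality protocols mentioned above are meant to catch before predictions reach decision-makers.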
See Also
- Risk Assessment Framework (RAF)
- Risk Based Testing
- Risk IT Framework
- Risk Management Framework (RMF)
- Risk Maturity Model (RMM)
- Information Technology Risk (IT Risk)
- Operational Risk Management (ORM)
- Value at Risk
- Credit Risk Management
- Value Risk Matrix (VRM)
- Advanced Measurement Approach (AMA)
- Chief Risk Officer (CRO)
- Own Risk and Solvency Assessment (ORSA)
- Risk-Adjusted Return on Capital (RAROC)