Risk Detection Cannot Be Automated
No matter how many impressive white papers say otherwise, risk detection cannot be fully automated. The latest example is Uncovering Surprising Supplier Behaviours Creating Organizational Risk by Atlantic Software Technologies, Inc. (an IBM Software Value Plus Business Partner). This white paper recommends automating inbound data classification to expedite throughput, arguing that automation of this function enables the organization to redeploy up to 40 percent of staff while increasing processing throughput as much as threefold. This is important because one cannot assess the true business value of a supplier relationship without understanding one's own relationship with the supplier. And, in order to really get a handle on the quality of the relationship, an organization has to be able to collect and analyze data points from the multiple impact points throughout its supply chain, both internally and externally, not just the ones that are easily visible and retrievable.
This is true. And, as the paper points out, if one does not understand the nature and quality of the relationship, one may never know that:
- a supplier delay, just communicated to one of your employees, will impact multiple customers,
- new international suppliers are being tapped to avoid single-sourcing risk, which may introduce quality risks of its own, or
- foreign nationals are handling sensitive information in violation of export control laws (and this last risk could put an officer of the company behind bars).
But automating the processing and classification of unstructured data is not going to reduce risk. In reality, it's going to increase risk. In a nutshell, here's why.
Let's say that external testing found lead paint on a children's toy. If you've identified "lead paint" as a risk and set up a rule that alerts someone in Quality Control that a review is required, you might feel you've mitigated the risk: the report will come in, get routed to Quality Control, a reviewer will see that lead levels are well beyond tolerance, and Procurement will be told to refuse the shipment. Problem solved. Right? Wrong!
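The kind of rule described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual product; the names (ALERT_TERMS, route_document, the queue labels) are invented for the example.

```python
# Sketch of a keyword-based routing rule: any inbound document mentioning
# an alert term is sent to Quality Control for human review.
# All names here are illustrative, not from any real classification product.

ALERT_TERMS = {"lead paint"}

def route_document(text: str) -> str:
    """Return the queue an inbound document should be routed to."""
    lowered = text.lower()
    if any(term in lowered for term in ALERT_TERMS):
        return "quality-control"   # flagged for review
    return "general-intake"       # no alert term found

report = "External testing detected lead paint on toy surface."
print(route_document(report))  # prints "quality-control"
```

The rule works exactly as intended on clean input, which is precisely what makes it feel safe.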
What happens if the test was performed by an individual who speaks English as a second language, trusts Microsoft Word to catch all misspellings, and mistypes "lead paint" as "led pant" in the report? Both are valid English words, and with grammar checking turned off, Microsoft Word will not complain. Is the automated classifier going to catch this? Not likely. You may remember to program in one or two misspellings, like "led paint", or an abbreviation, like "ld pnt", but you are not going to come up with every possible misspelling, and you won't want to: include too many variants and you'll get a flood of false positives (and misclassifications). If this is a product with zero tolerance and the test results are not acted on in time, you could be stuck with a multi-million-dollar inventory that can't be sold. Worse, if the product makes it onto shelves, gets bought, and someone gets sick, the resulting lawsuit could cost more than it cost to develop and manufacture the entire first production run.
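The brittleness of exact matching, and the tradeoff involved in loosening it, can be demonstrated with Python's standard-library difflib. This is a sketch, not a recommended classifier: the 0.8 similarity threshold is an arbitrary assumption, and lowering it to catch more misspellings is exactly what drives up false positives.

```python
# Exact keyword matching vs. fuzzy matching on a misspelled test report.
# The threshold value is an illustrative assumption, not a tuned parameter.
from difflib import SequenceMatcher

def exact_match(text: str, term: str) -> bool:
    """True only if the term appears verbatim (case-insensitive)."""
    return term in text.lower()

def fuzzy_match(text: str, term: str, threshold: float = 0.8) -> bool:
    """Slide a window the width of the term across the text and
    report a hit if any window is similar enough to the term."""
    words = text.lower().split()
    n = len(term.split())
    for i in range(len(words) - n + 1):
        window = " ".join(words[i:i + n])
        if SequenceMatcher(None, window, term).ratio() >= threshold:
            return True
    return False

report = "Test results: led pant detected above tolerance."
print(exact_match(report, "lead paint"))  # False - the misspelling slips through
print(fuzzy_match(report, "lead paint"))  # True  - caught by similarity
```

Note the flip side: a phrase like "red paint" is also highly similar to "lead paint", so the same threshold that rescues "led pant" will misclassify innocent documents. That precision/recall tradeoff is inherent to the approach, which is why a human reviewer still has to make the final call.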
Now, there's nothing wrong with deploying such technology to scan inbound documents and flag those of interest for review, but it should not be the foundation of any risk management strategy. Good risk management entails identifying relevant risks and providing a mechanism for anyone to report when a risk of interest may be materializing. Then someone knowledgeable about the risk reviews the situation and makes the call.