Venezia, Crema Catalana, and Trustworthy AI
I do believe that everyone should experience at least two of these three EU jewels once in their lifetimes; today I’m going to write about the third that’s also worth a closer look.
After an earlier post, “state-of-the-art strikes again,” I had a request to write a little more about the just-published Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment. It definitely does not roll off the tongue like Venezia or Crema Catalana, but, even though it is most definitely Euro-centric, if you care at all about trustworthy AI, you will appreciate the authors’ work and, if you’re smart, you’ll find a way to appropriate some of it too.
The self-assessment document referenced above is a follow-on to the foundational Ethics guidelines for trustworthy AI published in April 2019. Start with those guidelines. They are well-organized, thorough, and readily understood without having to be a programmer or a mathematician. They lay out the societal, political, technical, and other dimensions of Trustworthy AI and connect them to relevant EU laws, charters, and regulations. By well-organized, I mean that it is straightforward to extract the Euro-centric parts and adapt the framework to another jurisdiction, or to make it region-independent entirely.
With a working framework for trustworthy AI under our belts, we’re ready to take advantage of the aforementioned Assessment List. Again, for a forum like this, I’m not going to summarize or rehash what is already put together so well. Follow the link above and read it at your leisure. Let’s just jump right into some quick observations, for whatever they may be worth.
ALTAI for self-assessment
The most salient point has already been made. The report is obviously useful for EU-facing concerns, but it can and should be used as a resource by anyone grappling with the complexity and ambiguity of AI ethics and trust. You could almost get away with replacing “GDPR” with any/all relevant privacy regulations and statutes that you care about and you’d be pretty close to done.
It’s not really clear what a passing grade would look like. You may answer “no” to some of the very reasonable guidance offered to improve transparency or minimize negative side-effects of poorly governed AI, but it is not clear what you’re expected to do to turn that NO into a YES or even how critical each NO might be. Here’s a sample question:
Did you align the AI system with relevant standards (e.g. ISO, IEEE) or widely adopted protocols for (daily) data management and governance?
In general, guidance on what the implications or potential call-to-action might be after the self-assessment would be helpful.
You can learn a lot about a person by the questions they ask versus the things that they declare. This pearl of wisdom certainly extends to the authors of these two reports: the High-Level Expert Group on AI (AI HLEG). Read through this assessment and you will get a pretty good outline of what’s likely to be coming down the regulatory chute soon enough.
The list of self-assessments is thorough and it is also lengthy. For small ISVs or consultancies (or any small business), spending time on each area raised might prove to be VERY COSTLY. Happily, a small company can do a lot of good; unfortunately, that also means they can inadvertently cause harm. How can these guidelines be modulated to avoid unnecessarily burdening small, innovative companies?
Lastly, I cannot help but notice that the term “state of the art” is used in the assessment report. Specifically (and strategically), the term can be found on pages 17 (Avoidance of Unfair Bias) and 21 (Risk Management). I wrote about the importance of this term in an earlier post, Three words that will make or break your compliance programs. “State of the art” is a loaded term with special meaning (like the terms “reasonable” or “effective”). State of the art is calibrated by the high-water mark across the organizations in your supply chain, not, as one might presume from everyday use, by cutting-edge R&D. In other words, get ready to add AI trustworthiness to your vendor/third-party risk assessments.
Bottom line: in a crowded space with lots of very smart people focused on these issues, the self-assessment is, to me, a standout worth at least a skim (but it does need a catchier name).