Discussion:
Toward a web standard for XAI?
Paola Di Maio
2018-10-31 13:24:30 UTC
Permalink
Just wondering
https://www.w3.org/community/aikr/2018/10/31/towards-a-web-standard-for-explainable-ai/
Adam Sobieski
2018-11-01 07:35:45 UTC
Permalink
Artificial Intelligence Knowledge Representation Community Group,
Semantic Web Interest Group,
Paola Di Maio,

Artificial intelligence and machine learning systems could produce explanation and/or argumentation [1].

Deep learning models can be assembled by interconnecting components [2][3]. Sets of interconnected components can become interconnectable composite components. XAI [4] approaches should work for deep learning models assembled by interconnecting components. We can envision explanations and arguments, or generators for such, forming as deep learning models are assembled from components.
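As a minimal sketch of the idea (the component names here are invented; Lobe assembles components diagrammatically, but the composition principle is the same), in Python with PyTorch:

# A minimal sketch of deep learning models assembled by interconnecting
# components: simple components compose into a composite component, which
# is itself interconnectable. All names are illustrative.
import torch
import torch.nn as nn

# Simple components.
encoder = nn.Sequential(nn.Linear(16, 8), nn.ReLU())
classifier = nn.Linear(8, 3)

# A composite component assembled from interconnected subcomponents; it
# exposes the same nn.Module interface, so it can itself be interconnected
# into larger models.
composite = nn.Sequential(encoder, classifier)

model = nn.Sequential(composite, nn.Softmax(dim=-1))
print(model(torch.randn(2, 16)).shape)  # torch.Size([2, 3])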

What do you think about XAI and deep learning models assembled by interconnecting components?


Best regards,
Adam Sobieski
http://www.phoster.com/contents/

[1] https://www.w3.org/community/argumentation/
[2] https://www.lobe.ai/
[3] http://youtu.be/IN69suHxS8w
[4] https://www.darpa.mil/program/explainable-artificial-intelligence

Dave Raggett
2018-11-01 09:14:43 UTC
Permalink
It is certainly interesting, but I expect there are larger opportunities in combining computational statistics with symbolic representations and reasoning. An example is vision, where deep learning works well if you have large training sets and the data you want to apply the trained network to reflects the same statistics as the training set. Unfortunately, that is often not the case, e.g. due to changes in lighting or weather.

Many species of animals are able to see very soon after birth and clearly aren’t reliant on huge training sets. Moreover, they are able to perceive objects that they have never seen before. This calls for a more sophisticated architecture than stochastic backpropagation, one that can embody induction based upon commonalities and abduction for inferring the presence of a previously learned object from cues. Moreover, for survival, animals need to recognise behaviour and to distinguish predators from others. This means learning to spot patterns of behaviour and to associate them with a class of things that have been learned by induction.
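A toy sketch of the distinction, with hand-picked features standing in for whatever animals actually perceive: induction characterizes a class by the commonalities across its examples; abduction then infers the likely presence of a learned class from partial cues.

# Toy sketch: induction over commonalities, then abduction from cues.
# The features are invented stand-ins; a real system would learn them.

examples = {  # observed instances of each class, as feature sets
    "predator": [{"eyes_front", "teeth", "stalks"},
                 {"eyes_front", "teeth", "claws"}],
    "prey": [{"eyes_side", "grazes"},
             {"eyes_side", "grazes", "herd"}],
}

# Induction: a class is characterized by features common to its examples.
classes = {name: set.intersection(*feats) for name, feats in examples.items()}

def abduce(cues):
    """Abduction: infer the class whose learned features best explain the cues."""
    return max(classes, key=lambda c: len(classes[c] & cues))

print(classes)                 # learned characterizations
print(abduce({"eyes_front"}))  # 'predator', inferred from a partial cue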
Dave Raggett <***@w3.org> http://www.w3.org/People/Raggett
W3C Data Activity Lead & W3C champion for the Web of things
Adam Sobieski
2018-11-01 19:29:29 UTC
Permalink
Dave,

Thank you for the interesting points on combining computational statistics with symbolic representations and reasoning.

With respect to species of animals and how they can sense, perceive and exhibit behavior so soon after birth, one might find interesting the topics of evolution, developmental neurogenesis [1], instinct [2] and ethology [3].

Brainstorming with respect to XAI, there might also be value in considering computer simulation, event stream processing and complex event processing. That is, we can consider generating events as connectionist systems compute so that other software systems can process and interpret the events into descriptions of what the connectionist systems are doing instantaneously or over the course of time.
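As a minimal sketch of that idea (the event schema is invented for illustration), PyTorch forward hooks can emit structured events as a network computes, for downstream stream processors to interpret:

import torch
import torch.nn as nn

events = []  # stand-in for an event stream or message queue

def emit(module, inputs, output):
    # Emit a structured event describing what this component just computed.
    events.append({
        "component": module.__class__.__name__,
        "output_mean": output.mean().item(),
        "output_shape": tuple(output.shape),
    })

net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
for layer in net:
    layer.register_forward_hook(emit)

net(torch.randn(1, 4))

# A complex-event-processing layer could now interpret `events` into a
# running description of what the network is doing.
for e in events:
    print(e)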

With respect to considering new formats and Web standards for XAI explanations and arguments, one might find interesting natural language generation as pertaining to explanation [4] and argumentation [5][6].


Best regards,
Adam

[1] https://en.wikipedia.org/wiki/Neurogenesis#Developmental_neurogenesis
[2] https://en.wikipedia.org/wiki/Instinct
[3] https://en.wikipedia.org/wiki/Ethology

[4] http://www.phoster.com/mixed-initiative-dialogue-systems-explanation-mechanistic-reasoning-mental-simulation-and-imagination/generating-explanations/
[5] http://www.phoster.com/linguistics/argumentation/
[6] Reed, Chris Anthony. "Generating arguments in natural language." PhD diss., University of London, 1998.

Floriano Scioscia
2018-11-02 12:00:33 UTC
Permalink
My research group has proposed a machine learning classification approach exploiting automatic ontology-based annotation of input data to merge computational statistics with non-standard reasoning.

A paper has been accepted by the Semantic Web journal and is in press [1]; a pre-press version can be found at [2].

Basically, the typical classification problem of ML is treated as resource discovery by means of semantic matchmaking. Outputs of classification are endowed with machine-understandable OWL descriptions, while the adopted reasoning procedures for matchmaking allow logic-based result explanation.

In our early tests, classification performance is not bad w.r.t. the state of the art, but both models and outcomes are explainable: considering the reference questions in the DARPA XAI initiative [3], our approach fully addresses "Why did you do that?", "Why not something else?" and "How do I correct an error?/Why did you err?".
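A drastically simplified sketch of the idea (our actual approach uses OWL descriptions and non-standard inference services for matchmaking; here class descriptions are reduced to plain feature sets for illustration):

# Toy sketch of classification as semantic matchmaking: class descriptions
# stand in for OWL concepts, and the match itself yields a logic-based
# explanation of the result. All names and features are invented.

class_descriptions = {
    "Walking": {"low_accel_variance", "periodic", "upright"},
    "Running": {"high_accel_variance", "periodic", "upright"},
}

def classify(observed):
    scores = {c: len(desc & observed) for c, desc in class_descriptions.items()}
    best = max(scores, key=scores.get)
    # Explanation: what matched ("Why did you do that?"), what was missing,
    # and which alternative lost ("Why not something else?").
    explanation = {
        "matched": class_descriptions[best] & observed,
        "missing": class_descriptions[best] - observed,
        "runner_up": sorted(scores, key=scores.get)[-2],
    }
    return best, explanation

print(classify({"high_accel_variance", "periodic", "upright", "fast_gps"}))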

We are currently working to make the approach more amenable to large distributed sensor network/IoT scenarios and to improve both classification and computational performance.

Best regards,
Floriano

[1] https://content.iospress.com/articles/semantic-web/sw314
[2] http://www.semantic-web-journal.net/content/machine-learning-internet-things-semantic-enhanced-approach-1
[3] https://www.darpa.mil/program/explainable-artificial-intelligence

--
Floriano Scioscia, Ph.D.
Information Systems Research Group (http://sisinflab.poliba.it/)
Department of Electrical and Information Engineering (http://dei.poliba.it/)
Polytechnic University of Bari (http://www.poliba.it/)
Home page: http://sisinflab.poliba.it/scioscia/
Paola Di Maio
2018-11-13 02:59:16 UTC
Permalink
Dear Adam
Thanks, and sorry for taking time to reply.
It indeed triggered some thinking.
In the process of doing so, I realised that whatever we come up with has to match the web stack, and then realised that we do not have a stack for the distributed web yet, as such.
Is this what you are thinking, Adam Sobieski? Please share more.
Sounds like the right direction.
PDM
Adam Sobieski
2018-11-17 05:40:30 UTC
Permalink
Paola Di Maio,

When considering explanations of artificial intelligence systems’ behaviors or outputs, or when considering arguments that artificial intelligence systems’ behaviors or outputs are correct or the best possible, we can consider diagrammatic, recursive, component-based approaches to the design and representation of models and systems (e.g. Lobe). For such approaches, we can consider simple components, interconnections between components, and composite components, which are composed of interconnected subcomponents. We can also consider that components can have settings, that is, that components can be configurable.

As we consider recursive representations, a question is which level of abstraction one should use when generating an explanation, when composite components can be double-clicked upon to reveal yet more interconnected components. Which composite components should one utilize in an explanation or argument, which should be zoomed in on, and to which level of detail? We can generalize with respect to generating explanations and arguments from recursive models of: (1) mathematical proofs, (2) computer programs, and (3) component-based systems. We can consider a number of topics for all three cases: explanation planning, context modeling, task modeling, user modeling, cognitive load modeling, attention modeling, relevance modeling and adaptive explanation.
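A minimal sketch of explanation at a chosen level of abstraction (the component tree and its purposes are invented):

# Sketch: a recursive component model and an explanation generator that
# zooms into composite components only to a chosen depth.

from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    purpose: str
    subcomponents: list = field(default_factory=list)

def explain(component, depth):
    """Render an explanation, expanding composites only `depth` levels down."""
    lines = [f"{component.name}: {component.purpose}"]
    if depth > 0:
        for sub in component.subcomponents:
            lines += ["  " + line for line in explain(sub, depth - 1)]
    return lines

model = Component("Classifier", "labels images", [
    Component("FeatureExtractor", "detects edges and textures", [
        Component("ConvBlock1", "detects edges"),
        Component("ConvBlock2", "detects textures"),
    ]),
    Component("Head", "maps features to labels"),
])

print("\n".join(explain(model, depth=1)))  # coarse explanation
print("\n".join(explain(model, depth=2)))  # "double-click" to zoom in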

Another topic important to XAI is that some components are trained on data: the behavior of some components, simple or composite, is dependent upon training data, training procedures or experiences in environments. Brainstorming, we can consider that components or systems can produce data, e.g. event logs, when training or forming experiences in environments, such that the produced data can be of use in generating explanations and arguments for artificial intelligence systems’ behaviors or outputs. Pertinent topics include contextual summarization and narrative.
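Brainstorming further, a minimal sketch (all values invented) of turning a training event log into a brief narrative usable in an explanation:

# Sketch: components emit events while training; a summarizer later turns
# the log into a short narrative for use in explanations.

training_log = []

def train_step(epoch, loss):
    training_log.append({"epoch": epoch, "loss": loss})

for epoch, loss in enumerate([2.1, 1.4, 0.9, 0.85, 0.84]):
    train_step(epoch, loss)

def summarize(log):
    first, last = log[0], log[-1]
    delta = first["loss"] - last["loss"]
    trend = "converged" if abs(log[-1]["loss"] - log[-2]["loss"]) < 0.05 else "still improving"
    return (f"Trained for {len(log)} epochs; loss fell from {first['loss']} "
            f"to {last['loss']} ({delta:.2f} total) and {trend}.")

print(summarize(training_log))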

XAI topics are interesting; I’m enjoying the discussion. I hope that these theoretical topics can be of some use to developing new standards.


Best regards,
Adam


Schiller, Marvin, and Christoph Benzmüller. "Presenting proofs with adapted granularity." In Annual Conference on Artificial Intelligence, pp. 289-297. Springer, Berlin, Heidelberg, 2009.

Cheong, Yun-Gyung, and Robert Michael Young. "A Framework for Summarizing Game Experiences as Narratives." In AIIDE, pp. 106-108. 2006.

Adrian Walker
2018-11-17 14:41:26 UTC
Permalink
Hi Adam & All,

You may be interested in the way explanations are generated in the platform
online at the site below.

First is a headline. Clicking on that gets the first layer of detail.
Clicking on that... well, you see the idea.

This is done in a subject-independent way, by analyzing the underlying call
graph.
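A rough sketch of the idea (the call graph and conclusions here are invented; the platform's internals differ):

# Sketch of headline-first explanation by walking a call graph: each
# "click" reveals the next layer of support for a conclusion.

call_graph = {  # conclusion -> the sub-conclusions that support it
    "Shipment approved": ["Credit check passed", "Stock available"],
    "Credit check passed": ["Balance above threshold"],
    "Stock available": ["Warehouse count > 0"],
    "Balance above threshold": [],
    "Warehouse count > 0": [],
}

def drill_down(conclusion, indent=0):
    """Print a conclusion, then recursively reveal its layers of detail."""
    print("  " * indent + conclusion)
    for support in call_graph.get(conclusion, []):
        drill_down(support, indent + 1)

drill_down("Shipment approved")  # headline first, then layers of detail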

Cheers, -- Adrian

Adrian Walker
Executable English LLC
San Jose, CA, USA
860 830 2085
https://www.executable-english.com
ProjectParadigm-ICT-Program
2018-11-18 17:36:10 UTC
Permalink
Post by Adam Sobieski
When considering explanations of artificial intelligence systems’ behaviors or outputs [...] we can consider diagrammatic, recursive, component-based approaches to the design and representation of models and systems (e.g. Lobe). [...]
This recursiveness in what are very obviously category representations can be formalized by higher-dimensional categories in category theory.
Post by Adam Sobieski
As we consider recursive representations, a question is which level of abstraction one should use when generating an explanation [...]
This can be done using a category-graph-based programming language where recursiveness is embedded in the syntax structure and where, at the bottom of the parse tree, calls to context-specific programming languages are made to recursively determined context-specific components.
Post by Adam Sobieski
Another topic important to XAI is that some components are trained on data [...] Pertinent topics include contextual summarization and narrative.
Context can be made explicit by assigning categories.
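As a rough sketch of the first point (a real formalization would use higher-dimensional categories; here objects are interface types, morphisms are components, and composition is checked on the types, all names invented):

# Sketch of components as morphisms in a (one-dimensional) category:
# objects are interface types, morphisms are components, and composing
# components is morphism composition, defined only when interfaces agree.

from dataclasses import dataclass

@dataclass
class Component:  # a morphism: source type -> target type
    name: str
    src: str
    dst: str

def compose(g, f):
    """g after f; only defined when the interfaces agree (f.dst == g.src)."""
    assert f.dst == g.src, "interfaces do not match"
    return Component(f"{g.name}.{f.name}", f.src, g.dst)

encoder = Component("encoder", "Image", "Features")
head = Component("head", "Features", "Label")

classifier = compose(head, encoder)  # a composite component, itself a morphism
print(classifier)  # Component(name='head.encoder', src='Image', dst='Label')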
Milton Ponson
GSM: +297 747 8280
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: Bringing the ICT tools for sustainable development to all stakeholders worldwide through collaborative research on applied mathematics, advanced modeling, software and standards development

Paola Di Maio
2018-11-22 00:04:36 UTC
Permalink
Milton
I have been thinking. There have been (long) discussions on category theory before. Of course it has merits and useful applications, but how to leverage its power without falling prey to its known fallacies? I don't know if there is any recent reference work good enough to advance the science, but maybe this would also be a good opportunity to work on that, since we are throwing the entire universe of discourse into the cauldron.
How do we overcome the obvious flaws of category theory? (I think the argument is well made in Lakoff's Women, Fire, and Dangerous Things.) This blog post summarises some of the points:
https://jeremykun.com/2013/04/16/categories-whats-the-point/

In fact, this should be my next life mission.

Dr Paola Di Maio
Artificial Intelligence Knowledge Representation
Special Issue, Systems MDPI
*Cfp accepting manuscripts
A bit about me



ProjectParadigm-ICT-Program
2018-11-22 14:05:15 UTC
Permalink
Glad you asked.
I too was skeptical initially, and yes, there may be some flaws in category theory, but the convergence of string theory, quantum theory, computability issues, software engineering, and generalized frameworks for discussing all forms of logic and their underlying calculi leads to one common ground: the use of category theory.
Even the cognitive sciences and Biologically Inspired Cognitive Architectures draw heavily from category theory.
The current state of category theory has not yet considered unifying many fields, but string theorists and quantum physicists are actually the ones asking the pertinent questions, the answers to which point to a common ground.

Milton Ponson
GSM: +297 747 8280
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: Bringing the ICT tools for sustainable development to all stakeholders worldwide through collaborative research on applied mathematics, advanced modeling, software and standards development

Paola Di Maio
2018-11-23 03:07:33 UTC
Permalink
I suppose that category theory, with its advantages and limitations, could well correspond to the limitations of human and machine intelligence, as long as we are aware of its limitations, and possible distortions in reasoning that could lead to distortions in decision making are addressed by complementary approaches.
Post by ProjectParadigm-ICT-Program
I too was skeptical initially, and yes, there may be some flaws in category theory, but the convergence of string theory, quantum theory, computability issues, software engineering, and generalized frameworks for discussing all forms of logic and their underlying calculi leads to one common ground: the use of category theory.
Any pointers? If category theory, with its flaws in knowledge representation (the gap between the reality and the abstraction?), is all we have, then I am not surprised everyone is using it.
Post by ProjectParadigm-ICT-Program
Even the cognitive sciences and Biologically Inspired Cognitive Architectures draw heavily from category theory.
Is that where the limitations of human and machine cognition come from?
Post by ProjectParadigm-ICT-Program
The current state of category theory has not yet considered unifying many fields, but string theorists and quantum physicists are actually the ones asking the pertinent questions, the answers to which point to a common ground.
I am familiar, to some extent, with the common ground <g>
I'd love to see a unifying category of everything; sure, it will make us laugh.
Paola Di Maio
2018-11-23 03:52:58 UTC
Permalink
By means of a working example:

There is a known/accepted gap between the territory and the map; the assumption that the map corresponds to the territory can result in fatal error.

1. I'd like to see how category theory resolves this particular dichotomy.

2. If all the intelligent systems humanity depends on rely on such common wrong assumptions, that's where critical failures occur.

3. That's where the work needs to be done, imho.
Post by Paola Di Maio
I suppose that category theory , with its advantages and liimitations,
could well correspond to limitation of human and machine intelligence,
as long as we are aware of its limitations and possible distortions in
reasoning,
that could lead to distortions in decision making, are addressed by
complementary approaches
Post by ProjectParadigm-ICT-Program
I too was skeptical initially, and yes there may be some flaws in category theory, but convergence of sting theory, quantum theory, computability issues, software engineering, generalized frameworks for discussing all forms of logic and underlying calculi lead to one common ground: the use of category theory.
any pointers? if category theory and its flaws in knowledge representation (the gap beetween the reality and the abstraction ?) is all we have, then I am not surprising everyone is using it
Even Cognitive sciences and Biologically Inspired Cognitive Architectures draw heavily from category theory.
that's where the limitations of human and machien cognition come from?
Post by ProjectParadigm-ICT-Program
The current state of category theory has not yet considered unifying many fields, but string theorists and quantum physicists are actually the ones asking the pertinent questions, the answers of which point to a common ground.
I am familiar, to some extent, with the common ground <g>
I d love to see a unifying category of everything, sure it will make us laugh
Post by ProjectParadigm-ICT-Program
Milton
I have been thinking.
There have been (long) discussions on category theory before.
Of course it has merits and useful applications, but how do we leverage
its power without falling prey to its known fallacies?
I don't know if there is any recent reference work good enough to
advance the science, but maybe this would also be a good opportunity
to work on that, since we are throwing the entire universe of discourse
into the cauldron.
How do we overcome the obvious flaws of category theory? (I think the
argument is well made in Lakoff's Women, Fire, and Dangerous Things.)
This blog post summarises some of the points:
https://jeremykun.com/2013/04/16/categories-whats-the-point/
In fact, this should be my next life mission.
Dr Paola Di Maio
Artificial Intelligence Knowledge Representation
Special Issue, Systems MDPI
*Cfp accepting manuscripts
A bit about me
On Mon, Nov 19, 2018 at 1:36 AM ProjectParadigm-ICT-Program wrote:
When considering explanations of artificial intelligence systems’ behaviors or outputs or when considering arguments that artificial intelligence systems’ behaviors or outputs are correct or the best possible, we can consider diagrammatic, recursive, component-based approaches to the design and representation of models and systems (e.g. Lobe). For such approaches, we can consider simple components, interconnections between components, and composite components which are comprised of interconnected subcomponents. For such approaches, we can also consider that components can have settings, that components can be configurable.
This recursiveness in what are very obviously category representations can be formalized by higher-dimensional categories in category theory.
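As a minimal sketch of that reading (an assumption about what "higher-dimensional" buys here, not a worked-out formalism):

Objects: components $C_1, C_2, \dots$
1-cells: interconnections $w : C_i \to C_j$ (typed ports/wires).
Composites: a diagram $D$ of interconnected components packaged as a
single object $\mathrm{comp}(D)$.
2-cells: refinements $\alpha : \mathrm{comp}(D) \Rightarrow D$ that
expand a composite into its subcomponents.

"Double-clicking" a composite is then applying a 2-cell, and nesting composites is what gives the higher-dimensional structure.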
As we consider recursive representations, a question is which level of abstraction should one use when generating an explanation – when composite components can be double-clicked upon to reveal yet more interconnected components? Which composite components should one utilize in an explanation or argument and which should be zoomed in upon and to which level of detail? We can generalize with respect to generating explanations and arguments from recursive models of: (1) mathematical proofs, (2) computer programs, and (3) component-based systems. We can consider a number of topics for all three cases: explanation planning, context modeling, task modeling, user modeling, cognitive load modeling, attention modeling, relevance modeling and adaptive explanation.
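As a toy sketch of that zoom question (all names hypothetical; this is not Lobe's API or any standard): represent a model as a recursive component tree and let the explanation generator choose how many levels to expand.

from dataclasses import dataclass, field

@dataclass
class Component:
    # A node in a recursive, component-based model (hypothetical sketch).
    name: str
    purpose: str                 # one-line role, reused in explanations
    settings: dict = field(default_factory=dict)
    parts: list = field(default_factory=list)  # empty => simple component

    def explain(self, depth=1, indent=0):
        # Render an explanation down to `depth` levels of composition.
        # depth=0 merely names the component; larger depths "double-click"
        # into composites, i.e. choose a finer level of abstraction.
        pad = "  " * indent
        line = f"{pad}{self.name}: {self.purpose}"
        if self.settings:
            line += f" (settings: {self.settings})"
        lines = [line]
        if depth > 0:
            for part in self.parts:
                lines.append(part.explain(depth - 1, indent + 1))
        return "\n".join(lines)

# Usage: a composite model assembled from two components.
model = Component("vision-classifier", "labels images", parts=[
    Component("feature-extractor", "maps pixels to features",
              settings={"layers": 34}),
    Component("classifier-head", "maps features to labels",
              settings={"classes": 10}),
])
print(model.explain(depth=0))   # coarse, one-line explanation
print(model.explain(depth=2))   # zoomed-in explanation

Which depth to pass would then be driven by the user, task, and cognitive-load models listed above.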
This can be done using a category-graph-based programming language in which recursiveness is embedded in the syntax, and in which, at the bottom of the parse tree, calls are made to context-specific programming languages for recursively determined, context-specific components.
Another topic important to XAI is that some components are trained on data, that the behavior of some components, simple or composite, is dependent upon training data, training procedures or experiences in environments. Brainstorming, we can consider that components or systems can produce data, e.g. event logs, when training or forming experiences in environments, such that the produced data can be of use to generating explanations and arguments for artificial intelligence systems’ behaviors or outputs. Pertinent topics include contextual summarization and narrative.
Context can be made explicit by assigning categories.
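A companion sketch for the event-log idea above (again with hypothetical names, not an existing logging API): components emit structured events while training, and an explanation can later cite the events relevant to a given component.

import json
import time

class TrainingLog:
    # Collects structured events emitted while components train
    # (a sketch of the suggestion above, not an existing API).
    def __init__(self):
        self.events = []

    def emit(self, component, kind, **data):
        self.events.append({"t": time.time(), "component": component,
                            "kind": kind, **data})

    def summarize(self, component, last=3):
        # A crude contextual summary: the most recent events relevant
        # to explaining this component's learned behaviour.
        relevant = [e for e in self.events if e["component"] == component]
        return "\n".join(json.dumps(e) for e in relevant[-last:])

log = TrainingLog()
log.emit("classifier-head", "dataset", source="example-images", size=50000)
log.emit("classifier-head", "epoch", n=1, loss=1.92)
log.emit("classifier-head", "epoch", n=2, loss=1.41)
print(log.summarize("classifier-head"))

Contextual summarization and narrative generation would then operate over such logs rather than over raw weights.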
XAI topics are interesting; I’m enjoying the discussion. I hope that these theoretical topics can be of some use to developing new standards.
Milton Ponson
GSM: +297 747 8280
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: Bringing the ICT tools for sustainable development to all stakeholders worldwide through collaborative research on applied mathematics, advanced modeling, software and standards development
Paola Di Maio,
When considering explanations of artificial intelligence systems’ behaviors or outputs or when considering arguments that artificial intelligence systems’ behaviors or outputs are correct or the best possible, we can consider diagrammatic, recursive, component-based approaches to the design and representation of models and systems (e.g. Lobe). For such approaches, we can consider simple components, interconnections between components, and composite components which are comprised of interconnected subcomponents. For such approaches, we can also consider that components can have settings, that components can be configurable.
As we consider recursive representations, a question is which level of abstraction should one use when generating an explanation – when composite components can be double-clicked upon to reveal yet more interconnected components? Which composite components should one utilize in an explanation or argument and which should be zoomed in upon and to which level of detail? We can generalize with respect to generating explanations and arguments from recursive models of: (1) mathematical proofs, (2) computer programs, and (3) component-based systems. We can consider a number of topics for all three cases: explanation planning, context modeling, task modeling, user modeling, cognitive load modeling, attention modeling, relevance modeling and adaptive explanation.
Another topic important to XAI is that some components are trained on data, that the behavior of some components, simple or composite, is dependent upon training data, training procedures or experiences in environments. Brainstorming, we can consider that components or systems can produce data, e.g. event logs, when training or forming experiences in environments, such that the produced data can be of use to generating explanations and arguments for artificial intelligence systems’ behaviors or outputs. Pertinent topics include contextual summarization and narrative.
XAI topics are interesting; I’m enjoying the discussion. I hope that these theoretical topics can be of some use to developing new standards.
Best regards,
Adam
Schiller, Marvin, and Christoph Benzmüller. "Presenting proofs with adapted granularity." In Annual Conference on Artificial Intelligence, pp. 289-297. Springer, Berlin, Heidelberg, 2009.
Cheong, Yun-Gyung, and Robert Michael Young. "A Framework for Summarizing Game Experiences as Narratives." In AIIDE, pp. 106-108. 2006.
From: Paola Di Maio
Sent: Monday, November 12, 2018 9:59 PM
Subject: Re: Toward a web standard for XAI?
Dear Adam
Thanks, and sorry for taking time to reply.
It has indeed triggered some thinking.
In the process of doing so, I realised that whatever we come up with has to match the web stack, and then realised that we do not have a stack for the distributed web yet, as such.
ProjectParadigm-ICT-Program
2018-11-23 18:56:28 UTC
Permalink
I know all too well about the difference between the territory and the map.
A grand unifying theory can never be built; Gödel cum suis saw to that.
What we can strive for is as many consistent models of the territory as possible, with a way to navigate between them and to know their interrelationships.
And our arrogance in assuming we can know the territory is proven wrong by quantum physics; in my humble opinion we should also heed the advice of Buddhist philosophy (the Madhyamika Sautrantika Middle Way), which goes to great lengths in trying to explain why our senses play tricks on us in trying to capture the (All-Encompassing) Universe of Discourse and Limited Domains of Discourse.
Consistency is all we can and should care to look for in frameworks for formal modeling.
So far, category theory provides the most promising tool for consistency in formal modeling across many converging fields.
Bridging the gap between map and territory will take much more than just category theory, but it will be virtually impossible without it, unless we want AI that does not resemble human intelligence or, worse, is ill-equipped to deal with it.
And if we really want to get a better grip on AI, and consequently on KR, we should shed the anthropocentric view of it and try to model intelligence with human intelligence as just one instance.
Milton Ponson
GSM: +297 747 8280
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: Bringing the ICT tools for sustainable development to all stakeholders worldwide through collaborative research on applied mathematics, advanced modeling, software and standards development

ProjectParadigm-ICT-Program
2018-11-23 19:16:10 UTC
Permalink
Stephen Hawking and many advocates and founding fathers of AI have warned against the hubris of humankind regarding its achievements in AI, and against the current course of AI development.
Only if we know how to model AI from a non-anthropocentric point of view may we be able to build in safeguards that prevent an AI from reaching a level where it deems humans to be a nuisance, a threat, expendable, or the enemy.
I assume that the tacit, implicit assumption underlying all development in the areas of AI and KR is that they are put to use to create AI that is, for all practical purposes, there to serve humankind, not to threaten it, disobey it or, worse, supplant it.
AI should be able to understand our formal reasoning and the challenges of scientists and engineers, but also read our minds, gauge our feelings, and anticipate our actions and reactions.
Category theory is instrumental in structuring the former, including our limitations, but it will take other tools, yet to be developed, to tackle the other part of the spectrum.
Milton Ponson
GSM: +297 747 8280
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: Bringing the ICT tools for sustainable development to all stakeholders worldwide through collaborative research on applied mathematics, advanced modeling, software and standards development
