Cognitive Research: Knowledge Representation and Learning

Vincent J. Kovarik Jr., in Cognitive Radio Technology (Second Edition), 2009

12.2.2 Ontologies and Frame Systems

Declarative knowledge is of little value when it is simply a disjoint collection of facts and assertions. The data must be organized into a useful form in order to support reasoning and learning. This organization is typically referred to as an ontology. Ontology-based knowledge representation and reasoning for software-defined radios (SDRs) have been investigated by a number of individuals, including Wang et al. [2, 3], and in this text by Kokar et al. [4].

The common is-a-kind-of relationship organizes classes or types of entities into a type–subtype hierarchy similar to that of object-oriented programming languages. The differences between a type and its subtype can be structural or behavioral. For example, a pager and a cell phone may both be classified as a-kind-of personal communications device.

Ontology-based representation can be traced back to early frame systems [5], and both have several aspects or viewpoints that can be applied to the organization of a corpus of knowledge into an ontology. The Web Ontology Language (OWL) of Bechhofer et al. [6] has been used by a number of individuals to represent declarative knowledge about CR structures and relationships. For example, Baclawski et al. [7] explore the use of ontology-based reasoning for communications protocol interoperability. Figure 12.6 illustrates a possible ontology of communications devices according to a type hierarchy. Two common types of devices—a cell phone and a pager—are described within this contextual organization.

Figure 12.6. Communications device ontology.

The value of this type of organization from a learning system perspective is that, as new concepts and situations are encountered, the existing knowledge ontology can be searched to identify an appropriate classification for the new situation or concept. For example, if a Family Radio Service (FRS) walkie-talkie is introduced into the system, it may be described as being used by an individual capable of either sending or receiving at any given point in time. Using this knowledge to guide the traversal of the knowledge hierarchy, the system would follow the hierarchical links based on the known attributes of the new entity and reach the conclusion that the FRS radio should be classified as a PersonalCommunicationsDevice and a HalfDuplex device.

Learning within an ontology-based system is performed by incorporating and integrating new concepts into the existing set of entries. New concepts are incorporated into the ontology based on their similarity to existing entities; that is, new information is processed by a classifier. The classifier analyzes the properties and the values associated with the new concept and compares it against the existing set of concepts within the ontology. Wherever there is a similarity, the similar concept or concepts are candidates for associating the new concept into the ontology.
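
To make the classifier step concrete, the following is a minimal sketch of attribute-driven classification into a type hierarchy such as the one in Figure 12.6; the class names, attributes, and similarity measure are illustrative assumptions, not the chapter's implementation.

```python
# A minimal sketch (not the chapter's implementation) of attribute-driven
# classification into a small type hierarchy. Names are assumptions.

ONTOLOGY = {
    "CommunicationsDevice": {"attrs": {"communicates"}, "parent": None},
    "PersonalCommunicationsDevice": {"attrs": {"communicates", "personal"},
                                     "parent": "CommunicationsDevice"},
    "FullDuplexDevice": {"attrs": {"communicates", "personal", "simultaneous_tx_rx"},
                         "parent": "PersonalCommunicationsDevice"},
    "HalfDuplexDevice": {"attrs": {"communicates", "personal", "alternating_tx_rx"},
                         "parent": "PersonalCommunicationsDevice"},
}

def classify(new_attrs: set) -> str:
    """Return the most specific class whose attributes are all present
    in the new concept's attributes (a crude similarity measure)."""
    candidates = [name for name, spec in ONTOLOGY.items()
                  if spec["attrs"] <= new_attrs]
    # Most specific = the candidate with the largest attribute set.
    return max(candidates, key=lambda n: len(ONTOLOGY[n]["attrs"]))

# An FRS walkie-talkie: personal device that sends OR receives at one time.
frs_radio = {"communicates", "personal", "alternating_tx_rx"}
print(classify(frs_radio))   # -> HalfDuplexDevice
```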

This is nearly the same as the method by which we incorporate new concepts within the framework of our knowledge and experience. Using this approach, new entities can be linked into an ontology, enabling the system to extend its set of knowledge. Note that, although the example shown describes physical entities, the ontology can also represent related actions, events, or abstract entities, such as waveforms, and the attributes associated with each of these types of entities. Thus, organizing new knowledge can be performed across a range of conceptual domains.

Also note, however, that learning based on ontology extensions is predicated on the existence of an initial set of entities already organized into a hierarchical network of concepts.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780123745354000126

Increasing and standardizing quality of care using computerized guidelines for clinical decision support

Fedor Lehocki, ... Marek Mydliar, in Personalized Health Systems for Cardiovascular Disease, 2022

7.3.4.3 Monitoring and alerting

The specification of the GL’s declarative knowledge, although integrated with its procedural knowledge, is represented conceptually within a closely associated KB and therefore can be monitored once it is connected to patient data.

An example of this task is a CDSS client application that constantly monitors and reasons about a pattern that defines the declarative knowledge eligibility criteria for a particular GL, such as “moderate anemia for two weeks” (an abstraction derived, using the domain’s declarative knowledge, from a set of raw time-stamped hemoglobin values), and triggers an alert when the entry pattern evaluates to True. This type of alerting is referred to as asynchronous, data-driven, or event-driven reasoning. Subscribers (typically software modules), such as a DSS or even the patient’s mobile device, can subscribe to the patient’s DB, given a suitable monitoring system, to get alerts. Another option for performing a monitoring task is synchronous, query-driven (goal-driven) reasoning, which occurs when a specific client sends a request to the reasoning engine to calculate a given pattern (such as the one described above) and gets back the results of the evaluation.
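
The following minimal sketch illustrates both reasoning modes; the hemoglobin band, the 14-day window, and all names are illustrative assumptions rather than the guideline's actual eligibility criteria.

```python
# A minimal sketch of the two reasoning modes described above. The hemoglobin
# band, the 14-day window, and all names are illustrative assumptions.
from datetime import datetime, timedelta

MODERATE_ANEMIA_HB = (8.0, 10.9)   # g/dL, assumed band for "moderate anemia"

def moderate_anemia_for_two_weeks(samples, now):
    """samples: time-ordered list of (timestamp, hemoglobin in g/dL).
    True if the record covers the last 14 days and every value recorded in
    that window lies inside the moderate-anemia band."""
    lo, hi = MODERATE_ANEMIA_HB
    cutoff = now - timedelta(days=14)
    recent = [hb for t, hb in samples if t >= cutoff]
    covered = bool(samples) and samples[0][0] <= cutoff
    return covered and bool(recent) and all(lo <= hb <= hi for hb in recent)

class PatientMonitor:
    """Asynchronous, data-driven mode: subscribers are notified whenever a
    newly stored value makes the entry pattern evaluate to True."""
    def __init__(self):
        self.samples, self.subscribers = [], []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def store(self, timestamp, hb):
        self.samples.append((timestamp, hb))
        if moderate_anemia_for_two_weeks(self.samples, now=timestamp):
            for notify in self.subscribers:
                notify("entry pattern satisfied: moderate anemia for two weeks")

# Synchronous, query-driven mode would simply call the pattern function on
# demand; here the data-driven mode fires once the window is fully covered.
monitor = PatientMonitor()
monitor.subscribe(print)
for day in range(0, 15, 2):                     # one reading every two days
    monitor.store(datetime(2022, 1, 1) + timedelta(days=day), 9.5)
```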

Fig. 7.6 displays the architecture of the IDAN temporal abstraction mediator (Boaz & Shahar, 2005). The mediator’s controller effectively mediates a complex, knowledge-based query involving several domain-specific clinical abstractions by retrieving the relevant medical knowledge and clinical time-stamped data using the knowledge-based temporal abstraction (KBTA) ontology (Shahar, 1997); the temporal abstraction service then applies the appropriate context-sensitive knowledge to the relevant clinical data. The IDAN architecture is capable of accessing heterogeneous EMRs as well as providing sophisticated services for querying a patient’s raw data and derived abstract concepts. Another important component of the IDAN architecture is the MEIDA system, which includes a vocabulary server and a standard term search engine (German, Leibowitz, & Shahar, 2009).

Figure 7.6. The architecture of the IDAN knowledge-based temporal abstraction mediator. A clinical user accesses a medical decision support system, such as for GL application or for intelligent exploration of patient data. The (abstract) query is answered by applying relevant temporal abstraction knowledge to appropriate time-oriented clinical data.

The use of standardized vocabularies and terms enables referral of queries by the GL runtime application system to the temporal abstraction mediator, such as the IDAN architecture, regardless of the terminology used in each local clinical database. Similarly, the MOMENTUM module is an active time-oriented database within the incremental temporal abstraction architecture for intelligent abstraction, exploration, and analysis of clinical data (Spokoiny & Shahar, 2007).

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780128189504000045

Knowledge Based Modeling

J.G. Lenard, ... L. Cser, in Mathematical and Physical Simulation of the Properties of Hot Rolled Products, 1999

Why has 25% reduction been chosen?

However, no answer can be found if the grain size is between 80 and 250 μm, say 150 μm.

The knowledge used above is a typical example of declarative knowledge representation. Declarative knowledge is stored as a set of statements about the phenomenon to be described. These statements are static but can be added to, deleted or modified. Note that there are some other types of knowledge representation, such as procedural, symbolic, subsymbolic, etc.

Rule-based systems separate the declarative knowledge from the code that controls the inference and search procedures. The declarative knowledge is stored in a knowledge base (KB), while the control knowledge is kept in a separate area called an inference engine. An inference engine is an algorithm that dynamically directs or controls the system when it searches its knowledge base (Harmon and Hall, 1993). The inference engine matches the premise parts of the rules stored in the knowledge base with their action parts, building up logical chains forward or backward (forward chaining or backward chaining), depending on the task (Harmon and King, 1985; Buchanan et al., 1985).
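
The following sketch shows this split in miniature: the rule set plays the role of the knowledge base, and a small forward-chaining loop plays the role of the inference engine. The rules are invented stand-ins, not the book's rule base.

```python
# A minimal forward-chaining sketch illustrating the KB / inference-engine
# split described above; the rules are illustrative stand-ins.

# Knowledge base: declarative rules as (premises, conclusion) pairs.
RULES = [
    ({"grain_size_large"}, "apply_heavy_reduction"),
    ({"apply_heavy_reduction"}, "reduction_25_percent"),
    ({"grain_size_small"}, "apply_light_reduction"),
]

def forward_chain(facts: set) -> set:
    """Inference engine: repeatedly fire every rule whose premises are all
    satisfied, adding its conclusion, until no new fact can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"grain_size_large"}))
# -> {'grain_size_large', 'apply_heavy_reduction', 'reduction_25_percent'}
```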

Forward chaining is a method that finds every conclusion possible based on a given set of premises. A typical question to be answered using forward chaining is:

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780080427010500097

Mind model

Zhongzhi Shi, in Intelligence Science, 2021

4.4.2.3 Semantic memory

In addition to procedural knowledge, which is encoded as rules in Soar, there is declarative knowledge, which can be split into things that are known, such as facts, and things that are remembered, such as episodic experiences. Semantic learning and memory provides the ability to store and retrieve declarative facts about the world, such as tables have legs, dogs are animals, and Ann Arbor is in Michigan. This capability has been central to ACT-R’s ability to model a wide variety of human data, and adding it to Soar should enhance the ability to create agents that reason and use general knowledge about the world. In Soar, semantic memory is built up from structures that occur in working memory. A structure from semantic memory is retrieved by creating a cue in a special buffer in working memory. The cue is then used to search for the best partial match in semantic memory, which is then retrieved into working memory.
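
A minimal sketch of this cue-based retrieval cycle is shown below; it is illustrative only and does not reproduce Soar's actual matching algorithm or syntax.

```python
# A minimal, illustrative sketch of cue-based retrieval: the cue is a partial
# structure, and the best partial match in semantic memory is copied back
# into working memory. This is not Soar's actual algorithm.

SEMANTIC_MEMORY = [
    {"concept": "table", "has": "legs", "category": "furniture"},
    {"concept": "dog", "category": "animal", "has": "legs"},
    {"concept": "Ann Arbor", "located_in": "Michigan", "category": "city"},
]

def retrieve(cue: dict) -> dict:
    """Return the stored structure sharing the most attribute-value pairs
    with the cue (best partial match)."""
    def score(entry):
        return sum(1 for k, v in cue.items() if entry.get(k) == v)
    return max(SEMANTIC_MEMORY, key=score)

# The retrieved structure lands in a buffer of working memory.
working_memory = {"retrieval_buffer": retrieve({"category": "city"})}
print(working_memory["retrieval_buffer"]["concept"])   # -> Ann Arbor
```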

Because the knowledge is encoded in rules, retrieval requires an exact match of the cue, limiting the generality of what is learned. These factors made it difficult to use data chunking in new domains, raising the question of how it would naturally arise in a generally intelligent agent.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B978032385380400004X

How Can Good Strategy Use Be Taught to Children? Evaluation of Six Alternative Approaches

MICHAEL PRESSLEY, ... TERESA CARIGLIA-BULL, in Transfer of Learning: Contemporary Research and Applications, 1987

E SUMMARY

To return to our fictional example, Fenton Hardy spent years teaching strategies and domain-specific declarative knowledge to Frank and Joe. Like most forms of expertise (e.g., Lesgold, 1984), competent strategy use takes a long time to develop. Thus, we believe that strategy instruction should occur across the school curriculum and in diverse aspects of the child's world. Both direct explanation and reciprocal instruction seem especially well suited to conveying strategies and knowledge about strategies. Direct explanation seems most appropriate for regular classrooms. Given the amount of attention required from the person who is teaching the child, reciprocal instruction seems especially suited to family settings. When small teacher-to-student ratios are possible in school, however, it can be employed there as well. It seems to be powerful with students who experience difficulty learning strategies in the regular classroom setting.

Despite our preference for direct explanation and reciprocal instruction, we realize that the alternative methods have roles in the program of pervasive, long-term teaching that we believe is necessary to develop good strategy users who are broadly knowledgeable both about cognitive processing and the world in general. In making the summary recommendation for extended and pervasive strategy instruction, we recognize that it is more an article of faith than one based on conclusive experimental data. Definitive judgments about long-term strategy instruction must be deferred until there are evaluations of extensive strategy instructional interventions that extend over several years. What if children were given 3 to 5 years of strategy teaching distributed across reading, math, social studies, science, and sports? How would their thinking differ from that of children who experienced the normal curriculum? Such an experiment is possible, and it seems that we are near a point when such an extensive study might be conducted profitably. Many strategies have been identified for many different tasks in many different domains (although most require additional validation), and the six approaches to instruction identified here are available (although again, each requires additional study). If the graduates of a strategy-rich program really did look better on a variety of cognitive measures compared to children in the regular curriculum, the finding would do much to fuel additional study and implementation of the strategy instructional approach to education. We think it is time to dream big about this approach and to try to translate those big dreams into informative tests about the use of long-term strategy instruction to create generally better thinkers.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780121889500500102

PSYCHOLOGICAL FOUNDATIONS

David A. Rosenbaum, in Human Motor Control, 1991

Procedural and Declarative Knowledge

Perhaps the most fundamental distinction that has been drawn in the study of memory representations is between procedural and declarative knowledge (Squire, 1987). Insofar as motor control involves the physical enactment of procedures, it is worth considering this distinction in some detail. A way to do so is to review the case of a neurological patient known in the literature by his initials, H. M.

When H. M. was a young man he had severe epilepsy. To alleviate his seizures, he underwent a surgical procedure involving removal of the hippocampus, a structure in the forebrain. Soon after H. M.'s surgery, it became apparent that, as a result of the operation, he had suffered a significant memory impairment. H. M. could not learn new facts. For example, he could not learn new lists of words. When he was later asked to recall the lists or indicate whether he recognized them, his performance was no better than chance. H. M. was unable to recognize the people who visited him each day to test him on the word lists he had been taught. A convenient interpretation of H. M.'s difficulty was that he was unable to form new memories. According to this view, the regions of H. M.'s brain that were lesioned were the areas that allow for the transformation of information from a short-term, fragile state to a long-term, permanent state.

This interpretation came into question when it was found that H. M. could learn new perceptual-motor skills. For example, he could learn mirror tracing—a task in which one traces a complex path while viewing the path through a mirror. The task is hard at first, but with practice it gets easier. H. M. improved at the normal rate, yet each day when the task was presented to him, he denied ever having seen it before!

This strange phenomenon suggests that the human brain is organized in such a way that it stores different kinds of information in different ways (or perhaps in different locations). It is as if there is a distinction between “knowing how” and “knowing that,” or what has been called procedural and declarative knowledge (Squire, 1987). Declarative knowledge consists of facts that can be stated verbally, such as propositions about persons, places, things, and events. An example is the proposition “Christopher Columbus discovered America in 1492.” Procedural knowledge consists of instructions for the performance of a series of operations. As often as not, procedural knowledge is difficult or even impossible to verbalize. An example is the knowledge one has for riding a bicycle.

The fact that information is coded in procedural form does not mean it can never be articulated. Otherwise, it would be a vain hope for researchers concerned with skill to think they could ever articulate the nature of skill knowledge. Physicists know, for example, that the rule for riding a bicycle is to turn the handlebars so the curvature of the bike's trajectory is proportional to the angle of its imbalance divided by the square of its speed (Polanyi, 1964). Most bicyclists do not know this proposition, stated as such. However, at some level, the information summarized in the proposition is embodied in the neural networks that allow cyclists to stay erect while cycling.
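
Written as a formula (a transcription of the rule as stated in the text, with κ the path curvature, θ the imbalance angle, and v the speed):

```latex
% The bicycle rule as stated in the text: curvature of the trajectory is
% proportional to the imbalance angle divided by the square of the speed.
\kappa \propto \frac{\theta}{v^{2}}
```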

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780080571089500081

Psychological Foundations

David A. Rosenbaum, in Human Motor Control (Second Edition), 2010

Procedural and Declarative Knowledge

Perhaps the most fundamental distinction that has been drawn in the study of forms of memory representation is between procedural and declarative knowledge (Squire, 1987). Because motor control involves the physical enactment of procedures, it is worth considering this distinction in some detail. A way to do so is to review the case of a neurological patient known in the literature by his initials, H.M.

When H.M. was a young man he had severe epilepsy. To alleviate his seizures, he underwent a surgical procedure involving bilateral removal of the hippocampus, a structure in the forebrain. Soon after H.M.’s surgery, which involved removal of his left and right hippocampi, it became apparent that, as a result of the operation, he had suffered a significant memory impairment. He could not learn new lists of words. When later asked to recall the lists or indicate whether he recognized them, his performance was no better than chance. H.M. was also unable to recognize people who visited him each day to test him on these word lists.

A first interpretation of H.M.’s difficulty was that he could not form new memories. According to this view, the regions of H.M.’s brain that were lesioned were the areas that allow for the transformation of information from a short-term fragile form to a long-term permanent form. This interpretation came into question, however, when it was found that H.M. could learn new perceptual-motor skills (Scoville & Milner, 1957). For example, he could learn mirror tracing, where the task is to trace a complex path while viewing the path through a mirror (Figure 4.14). The task is hard at first, but with practice gets easier. H.M. improved on mirror tracing at the same rate as normal individuals, yet each day when the task was presented to him, he denied ever having seen it before.

FIGURE 4.14. Mirror tracing task (A) and improvement on mirror tracing task as a function of number of attempts on the first, second, and third days of testing in patient H.M. (B). While the hand is tracing the shape (a star in this instance) it can only be seen in the mirror. Errors are made when the pencil goes outside the shape’s borders.

From Smith and Kosslyn (2008), p. 200. With permission.

This striking outcome suggests that the human brain is organized in such a way that it stores different kinds of information in different ways (or perhaps in different locations). The brain respects the distinction between “knowing how” and “knowing that,” or, to use the terms with which we began this section, procedural and declarative knowledge (Squire, 1987). Procedural knowledge consists of implicit instructions for the performance of a series of operations, for example, riding a bicycle. As often as not, it is difficult or even impossible to verbalize procedural knowledge. Declarative knowledge consists of facts that can be stated verbally, such as propositions about persons, places, things, and events—for example, “Christopher Columbus discovered America in 1492.”

The fact that information is coded in procedural form does not mean it can never be articulated. Otherwise, it would be a vain hope for researchers concerned with skill to think they could ever articulate the nature of skill knowledge. Physicists know, for example, that the rule for riding a bicycle is to turn the handlebars so the curvature of the bike’s trajectory is proportional to the angle of its imbalance divided by the square of its speed, as mentioned in Chapter 1 (Polanyi, 1964). Most bicyclists cannot state this proposition spontaneously when asked how they manage to ride their bikes. However, at some level, the information summarized in the proposition is embodied in the neural networks that allow cyclists to ride as they do.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780123742261000048

Artificial Intelligence and Expert Systems

In Lees' Loss Prevention in the Process Industries (Fourth Edition), 2012

30.1.1 Knowledge

Knowledge is of many kinds and this is reflected in the forms of knowledge representation used.

One distinction commonly made is between data and information. Other distinctions made about knowledge are domain-independent vs. domain-specific, exact vs. fuzzy, and procedural vs. declarative knowledge. Some types of knowledge include facts; models; distinctions; relationships; constraints; procedures and plans; and rules-of-thumb, or heuristics.

The extent of the knowledge required varies greatly between problems. In certain limited worlds, such as the ‘blocks world’ extensively studied in AI, the world knowledge required is quite limited. In real-life, or mundane, situations, on the other hand, the human comes to the problem possessed of a massive store of knowledge. From this store he draws information not only about facts but also about other aspects, such as constraints.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780123971890000306

Knowledge Elicitation and Representation

DEBORAH A. BOEHM-DAVIS, in Human Performance Models for Computer-Aided Engineering, 1990

KNOWLEDGE ELICITATION

Knowledge elicitation is the term used to refer to any of the methods employed to gather data regarding what information people have about a particular system. This process generally elicits both procedural and declarative knowledge; furthermore, the knowledge is elicited from a person or people who are defined as expert in the domain being studied. Thus, the type of information elicited is typically about how the system works, what the system components are, how they are related, what the internal processes of the system are, and how they affect the system components from an expert's point of view.

The techniques for eliciting this knowledge include both direct and indirect methods. The direct procedures, as their name indicates, involve asking experts to report directly their experiences in using a system. This can be done through interviews, questionnaires, or verbal protocols. For each of these techniques, the responses of the subject-matter (or domain) expert form the knowledge base of that domain.

In interviews and questionnaires, experts may be asked merely to describe their interactions with the system, or they may be asked structured questions such as cause and effect queries. Using verbal protocol techniques (see Learning Research and Development Center, 1985, for a guide to performing cognitive task analyses), the subject-matter expert “talks aloud” while either solving typical tasks or running through simulations designed to tap a variety of circumstances likely to be encountered in using the system. These protocols are then analyzed by the researcher, or knowledge engineer, and the data are translated into knowledge structures that capture the observed information-gathering and decision-making strategies.

Indirect techniques include traditional experiments, simulations, and observational studies that capture and analyze patterns of responses, such as errors or pauses. In traditional psychological experiments, the effects of different manipulations are used to infer the underlying cognitive structure. In simulation studies, simulations of the system are developed, and results of the simulation runs are then compared with what people do in using the actual system. In observational studies, the responses, errors, or pauses made by the users are collected and analyzed for consistent patterns. Many of these techniques rely heavily on statistical analyses, such as scaling, path analysis, and ordered trees, to discover the structure of the information in the domain (see, for example, Reitman and Reuter, 1980; Schvaneveldt, Durso, Goldsmith, Breen, Cooke, Tucker, and DeMaio, 1985).

Recent research in this area has focused on building automated (or semiautomated) tools for acquiring this expertise (see, for example, the four-part series of special issues on knowledge acquisition for knowledge-based systems edited by Boose and Gaines, 1987).

Many of these tools, however, suffer from problems common to all the knowledge elicitation techniques discussed and to the whole approach for eliciting knowledge and representing it in a knowledge base.

Fischhoff outlined some of these concerns in a report by the National Research Council (1983). First, he points out the necessity of ensuring a common frame of reference between the researcher collecting the data and the subject-matter expert. Second, he notes the need to match the questions asked of the domain experts to their mental structures. Specifically, he stresses that most techniques assume experts can answer any question asked; researchers therefore do not consider the possibility of obtaining misleading data. This may arise either because experts do not want to admit how they actually accomplish their tasks or because the specific question asked falls outside the particular person's expertise. Finally, he points out that the quality of the information elicited must be clarified, in terms both of how complete and accurate the expert's knowledge is and of how biased the reports are.

This raises the question of how to validate the knowledge gleaned from an elicitation procedure. Researchers have questioned the impact of reporting biases on the part of the expert (see, for example, Cleaves, 1987), the veridicality of the retrospections experts use in developing answers to the questions posed, and the impact of the technique itself on the type of knowledge elicited and the organization of that information. Tied to this is the problem of knowing what an appropriate level of abstraction is for representing the knowledge collected from a subject-matter expert. These considerations make it difficult to determine whether the “correct” information has been elicited for any given system.

Another problem arises from the conceptions of the nature of novice-expert differences. All the techniques discussed so far are aimed at eliciting expert knowledge from experts. These techniques have buried in them the assumption that the differences between novices and experts are quantitative, not qualitative. In other words, the assumption is that what makes a person a novice is that he or she has not yet acquired as much information as the expert.

This assumption is not universally accepted. Rasmussen (1986) has suggested that the differences between novices and experts are qualitative, with expert models coming closer to what is true in the world. If this is the case, the emphasis on eliciting all the contents of a user's mental model may be misplaced. Rather, one should concentrate on what the triggering conditions are for an expert to recognize (or diagnose) a particular situation. This would suggest that models as sophisticated as the ones described may not be necessary; rather, it may be preferable to get a first cut at people's understanding of the systems they use, which could be done with small, quick investigations. Some insights into a person's expertise could also be obtained by calibrating their general ability to use the information contained in a tool, rather than by trying to elicit all of their knowledge about a system.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780122365300500261

Hypergraph-based type theory for software development in a Cyber-Physical context

Nathaniel Christen, in Advances in Ubiquitous Computing, 2020

3.3.3 Channelized hypergraphs and RDF

The Resource Description Framework (RDF) models information via directed graphs (Refs. [88–91] are good discussions of Semantic Web technologies from a graph-theoretic perspective), whose edges are labeled with concepts that, in well-structured contexts, are drawn from published Ontologies (these labels play a similar role to “classifiers” in CHs). In principle, all data expressed via RDF graphs is defined by unordered sets of labeled edges, also called “triples” (“〈Subject, Predicate, Object〉,” where the “Predicate” is the label). In practice, however, higher-level RDF notations such as TTL (Turtle or “Terse RDF Triple Language”) and Notation3 (N3) deal with aggregate groups of data, such as RDF containers and collections.

For example, imagine a representation of the fact “(A/The person named) Nathaniel, 46, has lived in Brooklyn, Buffalo, and Montreal” (shown in Fig. 3.2 both as a CH and in RDF). If we consider Turtle or N3 as languages and not just notations, it would appear as if their semantics is built around hyperedges rather than triples. It would seem that these languages encode many-to-many or one-to-many assertions, graphed as edges having more than one subject and/or predicate. Indeed, Tim Berners-Lee himself suggests that “Implementations may treat list as a data type rather than just a ladder of rdf:first and rdf:rest properties” [92, p. 6]. That is, the specification for RDF list-type data structures invites us to consider that they may be regarded as integral units rather than just aggregates that get pulled apart in semantic interpretation.

Fig. 3.2. CH versus RDF collections.

Technically, perhaps, this is an illusion. Despite their higher-level expressiveness, RDF expression languages are, perhaps, supposed to be deemed “syntactic sugar” for a more primitive listing of triples: the semantics of Turtle and N3 are conceived to be defined by translating expressions down to the triple sets that they logically imply (see also [93]). This intention accepts the paradigm that providing semantics for a formal language is closely related to defining which propositions are logically entailed by its statements.
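
The following plain-Python sketch shows what that lowering amounts to for the places list in the earlier example; the prefixes (ex:, _:b) are invented for illustration, and no RDF library is assumed.

```python
# A minimal sketch of the "syntactic sugar" reading: a Turtle-style list such
# as ("Brooklyn" "Buffalo" "Montreal") is lowered into the rdf:first/rdf:rest
# ladder of plain triples. Prefixes are illustrative; no RDF library is used.
RDF_FIRST, RDF_REST, RDF_NIL = "rdf:first", "rdf:rest", "rdf:nil"

def lower_list(items, blank_prefix="_:b"):
    """Return the node standing for the list's head plus the set of triples
    encoding `items` as an RDF collection."""
    triples, head = [], RDF_NIL
    # Build the ladder back to front so each cell points at the next one.
    for i, item in reversed(list(enumerate(items))):
        cell = f"{blank_prefix}{i}"
        triples.append((cell, RDF_FIRST, item))
        triples.append((cell, RDF_REST, head))
        head = cell
    return head, triples

head, triples = lower_list(['"Brooklyn"', '"Buffalo"', '"Montreal"'])
triples.append(("ex:Nathaniel", "ex:hasLivedIn", head))
for t in triples:
    print(t)
```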

There is, however, a divergent tradition in formal semantics that is oriented to type theory more than logic. It is consistent with this alternative approach to see a different semantics for a language like Turtle, where larger-scale aggregates become “first class” values. So, 〈⌈Nathaniel⌉, ⌈46⌉〉 can be seen as a (single, integral) value whose type is a name, age pair. Such a value has an “internal structure” which subsumes multiple data points. The RDF version is organized, instead, around a blank node which ties together disparate data points, such as my name and my age. This blank node is also connected to another blank node which ties together place and party. The blank nodes play an organizational role, since nodes are grouped together insofar as they connect to the same blank node. But the implied organization is less strictly entailed; one might assume that the 〈⌈Brooklyn⌉, ⌈Democrat⌉〉 nodes could just as readily be attached individually to the “name/age” blank (i.e., I live in Brooklyn, and I vote Democratic).

Why, that is, are Brooklyn and Democratic grouped together? What concept does this fusion model? There is a presumptive rationale for the name/age blank (i.e., the fusing name/age by joining them to a blank node rather than allowing them to take edges independently): conceivably there are multiple 46-year olds named Nathaniel, so that blank node plays a key semantic role (analogous to the quantifier in “There is a Nathaniel, age 46…”); it provides an unambiguous nexus so that further predicates can be attached to one specific 46-year-old Nathaniel rather than any old 〈⌈Nathaniel⌉, ⌈46⌉〉. But there is no similarly suggested semantic role for the “place/party” grouping. The name cannot logically be teased apart from the name/age blank (because there are multiple Nathaniels), but there seems to be no logical significance to the place/party grouping. Yet pairing these values can be motivated by a modeling convention—reflecting that geographic and party affiliation data are grouped together in a dataset or data model. The logical semantics of RDF make it harder to express these kinds of modeling assumptions that are driven by convention more than logic—an abstracting from data's modeling environment that can be desirable in some contexts but not in others.
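
The contrast can be made concrete with a small sketch; the type, field names, and predicates below are assumptions made for illustration, not part of RDF, CH, or any published vocabulary.

```python
# A sketch contrasting the two readings discussed above. Types, field names,
# and predicates are illustrative assumptions.
from typing import NamedTuple

# Type-theoretic reading: the pair is one first-class value with internal
# structure; "name" and "age" live at the type level.
class NameAge(NamedTuple):
    name: str
    age: int

person = NameAge("Nathaniel", 46)

# Logical/RDF reading: a blank node ties otherwise independent triples
# together; the grouping exists only through the shared subject.
blank = "_:person0"
triples = [
    (blank, "ex:name", '"Nathaniel"'),
    (blank, "ex:age", '"46"'),
    (blank, "ex:livesIn", '"Brooklyn"'),
    (blank, "ex:votes", '"Democrat"'),
]
```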

So, why does the Semantic Web community effectively insist on a semantic interpretation of Turtle and N3 as just a notational convenience for N-Triples rather than as higher-level languages with a different higher-level semantics—and despite statements like the earlier Tim Berners-Lee quote insinuating that an alternative interpretation has been contemplated even by those at the heart of Semantic Web specifications? Moreover, defining hierarchies of material composition or structural organization—and so by extension, potentially, distinct scales of modeling resolution—has been identified as an intrinsic part of domain-specific Ontology design (see Refs. [94–101], or Ref. [102]). Semantic Web advocates have not, however, promoted multitier structure as a feature of Semantic models fundamentally, as opposed to criteriology within specific Ontologies. To the degree that this has an explanation, it probably has something to do with reasoning engines: the tools that evaluate SPARQL queries operate on a triplestore basis. So the “reductive” semantic interpretation is arguably justified via a warrant that the definitive criteria for Semantic Web representations are not their conceptual elegance vis-à-vis human judgments but their utility in cross-ontology and cross-context inferences.

As a counter-argument, however, note that many inference engines in Constraint Solving, Computer Vision, and so forth, rely on specialized algorithms and cannot be reduced to a canonical query format. Libraries such as GeCODE and ITK are important because problem solving in many domains demands fine-tuned application-level engineering. We can think of these libraries as supporting special or domain-specific reasoning engines, often built for specific projects, whereas OWL-based reasoners like Fact++ are general engines that work on general-purpose RDF data without further qualification. In order to apply “special” reasoners to RDF, a contingent of nodes must be selected that is consistent with reasoners’ runtime requirements.

Of course, special reasoners cannot be expected to run on the domain of the entire Semantic Web, or even on “very large” datasets in general. A typical analysis will subdivide its problem into smaller parts that are each tractable to custom reasoners—in radiology, say, a diagnosis may proceed by first selecting a medical image series and then performing image-by-image segmentation. Applied to RDF, this two-step process can be considered a combination of general and special reasoners: a general language like SPARQL filters many nodes down to a smaller subset, which are then mapped/deserialized to domain-specific representations (including runtime memory). For example, RDF can link a patient to a diagnostic test, ordered on a particular date by a particular doctor, whose results can be obtained as a suite of images—thereby selecting the particular series relevant for a diagnostic task. General reasoners can find the images of interest and then pass them to special reasoners (such as segmentation algorithms) to analyze. Insofar as this architecture is in effect, Semantic Web data are a site for many kinds of reasoning engines. Some of these engines need to operate by transforming RDF data and resources to an optimized, internal representation. Moreover, the semantics of these representations will typically be closer to a high-level N3 semantics taken as sui generis, rather than as interpreted reductively as a notational convenience for lower-level formats like N-Triple. This appears to undermine the justification for reductive semantics in terms of OWL reasoners.
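
A minimal sketch of that two-step pattern follows, with a toy triple store, an invented set of predicates, and a placeholder segmentation routine standing in for the general and special reasoners.

```python
# A minimal sketch of the two-step pattern described above: a general,
# query-style filter narrows the graph to a few nodes, which are then handed
# to a special-purpose reasoner. Predicates and the segment() stub are
# illustrative assumptions.

TRIPLES = [
    ("ex:patient42", "ex:hasStudy", "ex:study7"),
    ("ex:study7", "ex:orderedOn", '"2020-03-01"'),
    ("ex:study7", "ex:hasImage", "ex:img1"),
    ("ex:study7", "ex:hasImage", "ex:img2"),
]

def general_select(patient):
    """General reasoner stand-in: follow patient -> study -> images."""
    studies = [o for s, p, o in TRIPLES if s == patient and p == "ex:hasStudy"]
    return [o for s, p, o in TRIPLES
            if s in studies and p == "ex:hasImage"]

def segment(image_node):
    """Special reasoner stand-in: a domain-specific algorithm that works on
    an optimized internal representation, not on the triples themselves."""
    return {"image": image_node, "regions": 3}   # placeholder result

results = [segment(img) for img in general_select("ex:patient42")]
print(results)
```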

Perhaps the most accurate paradigm is that Semantic Web data have two different interpretations, differing in being consistent with special and general semantics, respectively. It makes sense to label these the “special semantic interpretation” or “semantic interpretation for special-purpose reasoners” (SSI, maybe) and the “general semantic interpretation” (GSI), respectively. Both these interpretations should be deemed to have a role in the “semantics” of the Semantic Web.

Another order of considerations involves the semantics of RDF nodes and CH hypernodes, particularly with respect to uniqueness. Nodes in RDF fall into three classes: blank nodes; nodes with values from a small set of basic types like strings and integers; and nodes with URLs that are understood to be unique across the entire World Wide Web. There are no blank nodes in CH, and intrinsically no URLs either, although one can certainly define a URL type. There is nothing in the semantics of URLs which guarantees that each URL designates a distinct internet resource; this is just a convention which essentially fulfills itself de facto because it structures a web of commercial and legal practices, not just digital ones; for example, ownership is uniquely granted for each internet domain name. In CH, a data type may be structured to reflect institutional practices that guarantee the uniqueness of values in some context: books have unique ISBN codes; places have distinct GIS locations, etc. These uniqueness requirements, however, are not intrinsically part of CH, and need to be expressed with additional axioms. In general, a CH hypernode is a tuple of relatively simple values and any additional semantics are determined by type definitions (it may be useful to see CH hypernodes as roughly analogous to C structs—which have no a priori uniqueness mechanism).

Also, RDF types are less intrinsic to RDF semantics than in CH [103]. The foundational elements of CH are value-tuples (via nodes expressing values, whose tuples in turn are hypernodes). Tuples are indexed by position, not by labels: the tuple 〈⌈Nathaniel⌉, ⌈46⌉〉 does not in itself draw in the labels “name” or “age,” which instead are defined at the type-level (insofar as type-definitions may stipulate that the label “age” is an alias for the node in its second position, etc.). So there is no way to ascertain the semantic/conceptual intent of hypernodes without considering both hyponode and hypernode types. Conversely, RDF does not have actual tuples (though these can be represented as collections, if desired); and nodes are always joined to other nodes via labeled connectors—there is no direct equivalent to the CH modeling unit of a hyponode being included in a hypernode by position.
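
A small sketch of the positional-tuple point: the value itself carries only positions, while a label such as "age" is resolved through a (hypothetical) type definition.

```python
# A sketch of positional tuples versus labels: the value is just a tuple;
# "name" and "age" are aliases introduced by a hypothetical type definition,
# not stored in the value itself.

HYPERNODE_TYPES = {
    # type name -> mapping from label alias to tuple position
    "PersonCore": {"name": 0, "age": 1},
}

hypernode = ("Nathaniel", 46)          # the value: positions only

def field(value, type_name, label):
    """Resolve a label to a position using the type definition."""
    return value[HYPERNODE_TYPES[type_name][label]]

print(field(hypernode, "PersonCore", "age"))   # -> 46
```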

At its core, then, RDF semantics are built on the proposition that many nodes can be declared globally unique by fiat. This does not need to be true of all nodes—RDF types like integers and floats are more ethereal; the number 46 in one graph is indistinguishable from 46 in another graph. This can be formalized by saying that some nodes can be objects but never subjects. If such restrictions were not enforced, then RDF graphs could become in some sense overdetermined, implying relationships by virtue of quantitative magnitudes devoid of semantic content. This would open the door to bizarre judgments like “my age is non-prime” or “I am older than Mohamed Salah's 2018 goal totals.” One way to block these inferences is to prevent nodes like “the number 46” from being subjects as well as objects. But nodes which are not primitive values—ones, say, designating Mohamed Salah himself rather than his goal totals—are justifiably globally unique, since we have compelling reasons to adopt a model where there is exactly one thing which is that Mohamed Salah. So RDF semantics basically marries some primitive types that are objects but never subjects with a web of globally unique but internally unstructured values which can be either subject or object.

In CH, the “primitive” types are effectively hypotypes; hyponodes are (at least indirectly) analogous to object-only RDF nodes insofar as they can only be represented via inclusion inside hypernodes. But CH hypernodes are neither (in themselves) globally unique nor lacking in internal structure. In essence, an RDF semantics based on guaranteed uniqueness for atom-like primitives is replaced by a semantics based on structured building-blocks without guaranteed uniqueness. This alternative may be considered in the context of general versus special reasoners: since general reasoners potentially take the entire Semantic Web as their domain, global uniqueness is a more desired property than internal structure. However, since special reasoners only run on specially selected data, global uniqueness is less important than efficient mapping to domain-specific representations. It is not computationally optimal to deserialize data by running SPARQL queries.

Finally, as a last point in the comparison between RDF and CH semantics, it is worth considering the distinction between “declarative knowledge” and “procedural knowledge” (see, e.g., [80, vol. 2, pp. 182–197]). According to this distinction, canonical RDF data exemplify declarative knowledge because they assert apparent facts without explicitly trying to interpret or process them. Declarative knowledge circulates among software in canonical, reusable data formats, allowing individual components to use or make inferences from data according to their own purposes.

Counter to this paradigm, return to hypothetical Cyber-Physical examples, such as the conversion of voltage data to acceleration data, which is a prerequisite to accelerometers’ readings being useful in most contexts. Software possessing capabilities to process accelerometers therefore reveals what can be called procedural knowledge, because software so characterized not only receives data but also processes such data in standardized ways.

The declarative/procedural distinction perhaps fails to capture how procedural transformations may be understood as intrinsic to some semantic domains—so that even the information we perceive as “declarative” has a procedural element. For example, the very fact that “accelerometers” are not called “Voltmeters” (which are something else) suggests how the Ubiquitous Computing community perceives voltage-to-acceleration calculations as intrinsic to accelerometers’ data. But strictly speaking, the components that participate in USH networks are not just engaged in data sharing; they are functioning parts of the network because they can perform several widely recognized computations that are understood to be central to the relevant domain—in other words, they have (and share with their peers) a certain “procedural knowledge.”

RDF is structured as if static data sharing were the sole arbiter of semantically informed interactions between different components, which may have a variety of designs and rationales—which is to say, a Semantic Web. But a thorough account of formal communication semantics has to reckon with how semantic models are informed by the implicit, sometimes unconscious assumption that producers and/or consumers of data will have certain operational capacities: the dynamic processes anticipated as part of sharing data are hard to separate conceptually from the static data which is literally transferred. To continue the accelerometer example, designers can think of such instruments as “measuring acceleration” even though physically this is not strictly true; their output must be mathematically transformed for it to be interpreted in these terms. Whether represented via RDF graphs or Directed Hypergraphs, the semantics of shared data is incomplete unless the operations which may accompany sending and receiving data are recognized as preconditions for legitimate semantic alignment.

While ontologies are valuable for coordinating and integrating disparate semantic models, the Semantic Web has perhaps influenced engineers to conceive of semantically informed data sharing as mostly a matter of presenting static data conformant to published Ontologies (i.e., alignment of “declarative knowledge”). In reality, robust data sharing also needs an “alignment of procedural knowledge”: in an ideal Semantic Network, procedural capabilities are circulated among components, promoting an emergent “collective procedural knowledge” driven by transparency about code and libraries as well as about data and formats. The CH model arguably supports this possibility because it makes type assertions fundamental to semantics. Rigorous typing both lays a foundation for procedural alignment and mandates that procedural capabilities be factored into assessments of network components, because a type attribution has no meaning without adequate libraries and code to construct and interpret type-specific values.

Despite their differences, the Semantic Web, on the one hand, and Hypergraph-based frameworks, on the other, both belong to the overall space of graph-oriented semantic models. Hypergraphs can be emulated in RDF, and RDF graphs can be organically mapped to a Hypergraph representation (insofar as Directed Hypergraphs with annotations are a proper superspace of Directed Labeled Graphs). Semantic Web Ontologies for computer source code can thus be modeled by suitably typed DHs, even as we can also formulate Hypergraph-Based Source Code Ontologies directly. So, we are justified in assuming that a sufficient ontology exists for most or all programming languages. This means that, for any given procedure, we can assume that there is a corresponding DH representation which embodies that procedure's implementation.

Procedures, of course, depend on inputs which are fixed for each call, and produce “outputs” once they terminate. In the context of a graph-representation, this implies that some hypernodes represent and/or express values that are inputs, while others represent and/or express its outputs. These hypernodes are abstract in the sense (as in Lambda Calculus) that they do not have a specific assigned value within the body, qua formal structure. Instead, a runtime manifestation of a DH (or equivalently a CH, once channelized types are introduced) populates the abstract hypernodes with concrete values, which in turn allows expressions described by the CH to be evaluated.

These points suggest a strategy for unifying Lambda calculi with Source Code Ontologies. The essential construct in λ-calculi is that mathematical formulae include “free symbols” which are abstracted: sites where a formula can give rise to a concrete value, by supplying values to unknowns; or give rise to new formulae, via nested expressions. Analogously, nodes in a graph-based source-code representation are effectively λ-abstracted if they model input parameters, which are given concrete values when the procedure runs. Connecting the output of one procedure to the input of another—which can be modeled as a graph operation, linking two nodes—is then a graph-based analog to embedding a complex expression into a formula (via a free symbol in the latter).
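
A minimal, illustrative sketch of this reading: each procedure graph keeps its input site abstract until runtime, and composing two procedures is a graph operation that wires one output node to the other's input node.

```python
# A minimal sketch of the idea that some nodes in a procedure's graph are
# "abstract" (lambda-like input/output sites) and only receive concrete
# values at runtime; linking one procedure's output node to another's input
# node composes them. All names are illustrative.

class ProcGraph:
    def __init__(self, name, body):
        self.name = name
        self.body = body                 # the procedure's implementation
        self.input_value = None          # abstract until runtime

    def run(self):
        return self.body(self.input_value)

double = ProcGraph("double", lambda x: 2 * x)
incr   = ProcGraph("incr",   lambda x: x + 1)

def link(producer, consumer, value):
    """Graph operation: feed `value` into `producer`, wire its output node
    to `consumer`'s input node, and evaluate."""
    producer.input_value = value
    consumer.input_value = producer.run()
    return consumer.run()

print(link(double, incr, 20))   # -> 41
```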

Carrying this analogy further, I earlier mentioned different λ-Calculus extensions inspired by programming-language features such as object-orientation, exceptions, and by-reference or by-value captures. These, too, can be incorporated into a Source Code Ontology: for example, the connection between a node holding a value passed to an input parameter node, in a procedure signature, is semantically distinct from the nodes holding “Objects” which are senders and receivers for “messages,” in Object-Oriented parlance. Variant input/output protocols, including objects, captures, and exceptions, are certainly semantic constructs (in the computer-code domain) which Source Code Ontologies should recognize. So we can see a convergence in the modeling of multifarious input/output protocols via λ-Calculus and via Source Code Ontologies. I will now discuss a corresponding expansion in the realm of applied Type Theory, with the goal of ultimately folding type theory into this convergence as well.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780128168011000037
