A label is a control on a form or report that contains descriptive information, usually a field name or title.

Requirements Analysis and Conceptual Data Modeling

Toby Teorey, ... H.V. Jagadish, in Database Modeling and Design (Fifth Edition), 2011

Entity Contents

Entities should contain descriptive information. If there is descriptive information about a data element, the data element should be classified as an entity. If a data element requires only an identifier and does not have relationships, it should be classified as an attribute. With city, for example, if there is some descriptive information such as country and population for cities, then city should be classified as an entity. If only the city name is needed to identify a city, then city should be classified as an attribute associated with some entity, such as Project. The exception to this rule is that if the identity of the value needs to be constrained by set membership, you should create it as an entity. For example, “state” is much the same as city, but you probably want to have a State entity that contains all the valid State instances. Examples of other data elements in the real world that are typically classified as entities include Employee, Task, Project, Department, Company, Customer, and so on.
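
To make the distinction concrete, the following minimal sketch (illustrative only, not from the chapter) contrasts the two choices for city using Python dataclasses:

```python
# City as an entity: it carries descriptive attributes of its own and can be
# related to other entities.
from dataclasses import dataclass

@dataclass
class City:
    name: str
    country: str
    population: int

@dataclass
class Project:
    project_id: int
    name: str
    city: City          # relationship to the City entity

# City as a mere attribute: only the name is needed, so it lives on Project.
@dataclass
class ProjectWithCityAttribute:
    project_id: int
    name: str
    city_name: str      # descriptive attribute of Project
```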


URL: //www.sciencedirect.com/science/article/pii/B9780123820204000045

Information Risk Assessment

Timothy Virtue, Justin Rainey, in HCISPP Study Guide, 2015

Task 1-4 Identify Information Sources

Identify the sources of descriptive, threat, vulnerability, and impact information to be used in the risk assessment. Descriptive information enables organizations to be able to determine the relevance of threat and vulnerability information.

At Tier 1, descriptive information can include:

The type of risk management and information security governance structures in place within organizations; and

How the organization identifies and prioritizes critical missions/business functions.

At Tier 2, descriptive information can include information about:

Organizational mission/business processes, functional management processes, and information flows;

Enterprise architecture, information security architecture, and the technical/process flow architectures of the systems, common infrastructures, and shared services that fall within the scope of the risk assessment; and

The external environments in which organizations operate including the relationships and dependencies with external providers.

Such information is typically found in architectural documentation (particularly documentation of high-level operational views), business continuity plans, and risk assessment reports for organizational information systems, common infrastructures, and shared services that fall within the scope of the risk assessment.

At Tier 3, descriptive information can include information about:

The design of and technologies used in organizational information systems;

The environment in which the systems operate;

Connectivity to and dependency on other information systems; and

Dependencies on common infrastructures or shared services.

Such information is found in system documentation, contingency plans, and risk assessment reports for other information systems, infrastructures, and services. Sources of information can be either internal or external to organizations.

Internal sources of information that can provide insights into both threats and vulnerabilities can include risk assessment reports, incident reports, security logs, trouble tickets, and monitoring results. Note that internally, information from risk assessment reports at one tier can serve as input to risk assessments at other tiers. Mission/business owners are encouraged to identify not only common infrastructure and/or support services they depend on but also those they might use under specific operational circumstances.

External sources of threat information can include cross-community organizations [e.g., US Computer Emergency Readiness Team (US-CERT), Information Sharing and Analysis Centers (ISACs) for critical infrastructure sectors], research and nongovernmental organizations (e.g., Carnegie Mellon University, Software Engineering Institute – CERT), and security service providers. Organizations using external sources should consider the timeliness, specificity, and relevance of threat information. Similar to sources of threat information, sources of vulnerability information can also be either internal or external to organizations. Information about predisposing conditions can be found in a variety of sources including descriptions of information systems, operational environments, shared services, common infrastructures, and enterprise architecture.

Sources of impact information can include mission/business impact analyses, information system component inventories, and security categorizations. Security categorization constitutes a determination of the potential impacts should certain events occur that jeopardize the information and information systems needed by the organization to accomplish its assigned missions, protect its assets, fulfill its legal responsibilities, maintain its day-to-day functions, and protect individuals. Security categories are to be used in conjunction with vulnerability and threat information in assessing the risk to organizational operations and assets, individuals, and other organizations. Security categories constitute an initial summary of impact in terms of failures to meet the security objectives of confidentiality, integrity, and availability, and are informed by types of harm.


URL: //www.sciencedirect.com/science/article/pii/B9780128020432000069

On the Use of Unsupervised Techniques for Fraud Detection in VoIP Networks

Yacine Rebahi, ... Pascal Lorenz, in Emerging Trends in ICT Security, 2014

Call data records

Every time a call is placed on a telecommunication network, descriptive information about the call is saved as a Call Data Record (CDR). Millions of CDRs are generated and stored every day. At a minimum, each Call Data Record has to include the originating and terminating phone numbers, the date and time of the call, and the duration of the call. The CDRs might also include other kinds of data that are not necessary but are useful for billing, for instance, the identifier of the telephone exchange writing the record, a sequence number identifying the record, the result of the call (whether it was answered, busy, etc.), the route by which the call entered the exchange, any fault condition encountered, and any features used during the call, such as call waiting. An example of the CDRs made available by a VoIP provider and used for testing (see the final section for more details) includes the fields: Time (the start time of the call); SIP Response Code: 2xx, 3xx, 4xx, 5xx, or 6xx; SIP Method: INVITE (mainly); User-name (From URI); To URI; To-Tag; From-Tag; User-Agent; Source IP; RPID (Remote Party ID); and Duration.
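
As an illustration of the record structure just described, here is a hedged sketch of a CDR type in Python; the field names are assumptions for illustration, not a provider's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class CallDataRecord:
    # Mandatory content: numbers, date/time, and duration of the call.
    originating_number: str
    terminating_number: str
    start_time: datetime
    duration_seconds: int
    # Optional VoIP-specific content, as in the test data described above.
    sip_method: Optional[str] = None         # e.g., "INVITE"
    sip_response_code: Optional[int] = None  # e.g., 200, 486, 604
    from_uri: Optional[str] = None
    to_uri: Optional[str] = None
    user_agent: Optional[str] = None
    source_ip: Optional[str] = None
    remote_party_id: Optional[str] = None    # RPID
```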


URL: //www.sciencedirect.com/science/article/pii/B9780124114746000220

UDDI

James McGovern, ... Sunil Mathew, in Java Web Services Architecture, 2003

businessService

In UDDI, a businessService entry indicates a logical service and holds descriptive information about a Web service in business terms. A businessService is a child of a businessEntity that provides the service. Information about how a businessService can be instantiated is contained within a bindingTemplate.

Each businessService has a unique identifier. This value is assigned by each UDDI operator and cannot be edited by the publisher. It also contains the key to its parent businessEntity. Let us look at an example:
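
The original listing is not reproduced here; as a stand-in, the following hedged sketch assembles an illustrative businessService fragment with Python's xml.etree (all keys, names, and URLs are invented):

```python
import xml.etree.ElementTree as ET

# A businessService carries its own serviceKey plus the businessKey of the
# parent businessEntity; the bindingTemplate holds the technical details.
service = ET.Element("businessService",
                     serviceKey="uuid:1b2c3d4e-0000-0000-0000-000000000001",
                     businessKey="uuid:9f8e7d6c-0000-0000-0000-000000000002")
ET.SubElement(service, "name").text = "StockQuoteService"
ET.SubElement(service, "description").text = "Returns delayed stock quotes"

templates = ET.SubElement(service, "bindingTemplates")
binding = ET.SubElement(templates, "bindingTemplate",
                        bindingKey="uuid:5a6b7c8d-0000-0000-0000-000000000003",
                        serviceKey="uuid:1b2c3d4e-0000-0000-0000-000000000001")
ET.SubElement(binding, "accessPoint", URLType="http").text = \
    "http://example.com/stockquote"

print(ET.tostring(service, encoding="unicode"))
```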

Figures 6.9 and 6.10 show the elements of a businessEntity and a businessService. In both examples, name is required, but the description is optional. The bindingTemplate element is required and specifies technical information on various implementations of the service. We will expand on this later. The categoryBag is used similarly to that of businessEntity and allows the service to be classified using multiple taxonomies.

Figure 6.9. businessService

Figure 6.10. bindingTemplate


URL: //www.sciencedirect.com/science/article/pii/B9781558609006500099

Social Applications

Pawan Vora, in Web Application Design Patterns, 2009

How

Typically, adding tags to a content item is straightforward. To enter tags, let users enter keywords separated by a space or a comma (or another delimiter) in a text field. Using space as a delimiter may be problematic when users want to enter multiword tags; therefore, commas, semicolons, or other special characters are better delimiters. In addition, allow users to tag both the content they are adding and the content that already exists (Figure 9.7).
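
A minimal sketch of comma-delimited tag entry (the function and its rules are illustrative, not from the chapter): split on commas so multiword tags survive, trim whitespace, drop empties, and de-duplicate case-insensitively.

```python
def parse_tags(raw: str) -> list[str]:
    tags, seen = [], set()
    for part in raw.split(","):
        tag = " ".join(part.split())  # trim and collapse internal whitespace
        if tag and tag.lower() not in seen:
            seen.add(tag.lower())
            tags.append(tag)
    return tags

print(parse_tags("new york, skyline , sunset, Sunset"))
# ['new york', 'skyline', 'sunset']
```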

Figure 9.7. Flickr allows users to add tags to photos they upload.

KEEP TAGGING OPTIONAL

The main purpose of tagging is to allow users to provide some descriptive information about content to facilitate finding it in the future. Because the primary user task is to add content, tagging (or providing other descriptive information) should be optional. However, users should be permitted to add tags later.

ALLOW USERS TO TAG SEVERAL ITEMS TOGETHER

For content such as photos, users may want to add the same tags to several items. Allow them to select items that will share the same tags and apply tags to them in “bulk” or “batch” mode (Figure 9.8).

Figure 9.8. Flickr allows users to apply tags in a “batch” mode. Users can batch photos that they want to tag and then click “Add Tags” to add descriptions to all the items in the batch.

SUGGEST TAGS TO MINIMIZE VARIABILITY

One of the problems with tagging is that items may be tagged using seemingly similar labels caused by typos, plurals, or minor differences in spelling (e.g., color versus colour). For example, one user may label an item as “web site,” another as “website,” and yet another as “web_site” or “websites.” By suggesting tags, the application lets users pick from existing tags and minimizes redundancy and unnecessary distinctions in tags.

In addition, suggesting tags may also make users consider alternative ways to describe content and avoid conservative labels among users new to tagging. Suggestions may be in the form of the following (Smith, 2007):

Previously used tags. Tags that the user has entered already.

Popular tags. Tags that have been used frequently by others.

Recommended tags. Tags the user should consider based on popular tags, recently used tags, and other factors.

To make it easy to add suggested tags, allow users to select from a list (Figure 9.9). While entering tags, they may be provided with suggested tags, using the AUTOSUGGEST/AUTOCOMPLETION rich-interaction pattern (see Chapter 8).
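
A hedged sketch of how suggestions might be ranked for such an autosuggest box, combining the user's previously used tags with globally popular ones (data structures and weights are assumptions, not how Del.icio.us or Flickr actually work):

```python
from collections import Counter

def suggest_tags(prefix: str,
                 user_tags: Counter,
                 popular_tags: Counter,
                 limit: int = 5) -> list[str]:
    prefix = prefix.lower()
    scores = Counter()
    for tag, count in user_tags.items():
        if tag.startswith(prefix):
            scores[tag] += 2 * count      # favour the user's own history
    for tag, count in popular_tags.items():
        if tag.startswith(prefix):
            scores[tag] += count
    return [tag for tag, _ in scores.most_common(limit)]

user = Counter({"website": 4, "web design": 2})
popular = Counter({"website": 120, "webcam": 30, "wedding": 25})
print(suggest_tags("we", user, popular))
# ['website', 'webcam', 'wedding', 'web design']
```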

Figure 9.9. Del.icio.us both recommends tags and lists popular ones for users to consider when adding a bookmark and tagging it. To use one or more tags, users just have to click on them, and those tags are populated in the “tags” text field.

ALLOW USERS TO CHANGE AND DELETE THEIR TAGS

Users may want to change their tags because they made a mistake or have found other tags that better describe the content. Also, if users have tagged a content item to describe an action they are going to take (e.g., labeling an item “to do” or “urgent” in Gmail), they may want to remove those tags if they are no longer relevant. To accommodate such needs, allow users to remove, change, or add tags to an existing item (Figure 9.10).

Figure 9.10. Del.icio.us allows users to change or delete tags by clicking “edit” next to the bookmarked item.

Managing tags should be possible in batch mode as well—that is, users should be able to change or delete tags for multiple items at the same time. If it would help users, allow them to replace one tag with one or more tags.


URL: //www.sciencedirect.com/science/article/pii/B9780123742650000098

Process

Enda Ridge, in Guerrilla Analytics, 2015

17.2.3 Common Information

The analytics workflows are actually very similar and have a lot of descriptive information in common. This descriptive information is needed when the work product is created, when it is worked on, and when it is reviewed. Some typically useful information to track includes the following.

Creating the work product: This is the initial registration of the work product in the workflow tracking system.

The customer who requested this work product. This facilitates rapid interaction with the customer and allows themes to emerge.

The project work stream that will use the work product.

When the work product was requested.

The UID of the work product.

Which team member is assigned to complete the work product.

Completing the work product: This is the activity of doing the actual work on the work product. Keeping with the Guerrilla Analytics principles, interruptions to doing actual work should be minimized. That said, it is useful to track the following information while completing a work product.

Any helpful comments on lessons learned, exceptions, and other context that would help somebody else understand the work product. For example, in a modeling activity, a team member should note their observations about the data and their rationale for choosing a particular model. This is helpful if the original executing team member is no longer on the team or is not available.

Any conversations with the customer that helped shape or change the work product as originally described. This is useful if particular design decisions and interpretations of outputs are questioned in the future.

Review of the work product: This is the activity of reviewing the completed work product. Here it is important to capture the following.

Who reviewed the work product. This person is different from the work product executor.

When the reviewer did their review.

Any comments and feedback on the work product. It is important to capture this information so the evolution of the work product can be understood and also so that junior team members have a record they can refer back to and learn from.

The information above greatly helps in tracking and understanding a work product as well as in any handover of work to other team members.
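
As a hedged sketch (field names are illustrative, not the book's template), the information above could be captured in a single tracking record per work product:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class WorkProductRecord:
    # Creating the work product
    uid: str                 # the work product's UID
    customer: str            # who requested it
    work_stream: str         # project work stream that will use it
    requested_on: date
    assigned_to: str         # team member completing the work
    # Completing the work product
    notes: list[str] = field(default_factory=list)  # lessons learned, rationale
    customer_conversations: list[str] = field(default_factory=list)
    # Review of the work product
    reviewed_by: Optional[str] = None   # must differ from assigned_to
    reviewed_on: Optional[date] = None
    review_comments: list[str] = field(default_factory=list)
```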

Given that there are differences between the three types of analytics workflow (data tracking, work products, and build activities), there are also some differences in the information that should be captured about those workflows.


URL: //www.sciencedirect.com/science/article/pii/B9780128002186000175

Risk Management Framework Planning and Initiation

Stephen D. Gantz, Daniel R. Philpott, in FISMA and the Risk Management Framework, 2012

Inputs to Information System Categorization

The first step in the RMF focuses on categorizing the information system and the information the system uses based on the impact to the organization that would result from a loss of confidentiality, integrity, or availability. As described in more detail in Chapter 7, this step requires system owners to consider each of the different information types relevant to the system and the system overall, the risks to those information assets, and the severity of adverse events that might occur. Information owners have responsibility for identifying and describing the information types relevant for each system. This information is used not only in security categorization but also in risk assessment and privacy impact assessment activities that inform security categorization. System owners and information owners also need to consider any organizational standards regarding information type definitions and security categorization levels, if such standards have been defined. Some agencies specify information types at an organization-wide level and direct system owners to use those specifications in their own security categorization efforts, while other agencies leave information type identification to each system owner [12]. With organizational standards in place, system owners typically have the ability to assign a security categorization equal to or higher than the standard set for each information type. Inventories of information types may be developed as part of the enterprise architecture—where security categorization levels are among the attributes documented in enterprise data models—or by information security or risk management programs. Such standards simplify the security categorization task by identifying all information types used by the organization and evaluating their impact levels, reducing the need for system owners to consult federal guidance on security categorization for generic information types or to define their own information types.
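
As a simple illustration of how per-information-type impact levels roll up into a system categorization, here is a hedged sketch assuming the familiar high-water-mark approach of FIPS 199; the information types and levels are made up:

```python
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def categorize(info_types: dict[str, dict[str, str]]) -> dict[str, str]:
    """Take the highest impact level per security objective across all types."""
    result = {}
    for objective in ("confidentiality", "integrity", "availability"):
        worst = max(info_types.values(), key=lambda t: LEVELS[t[objective]])
        result[objective] = worst[objective]
    return result

system = {
    "personnel records": {"confidentiality": "moderate", "integrity": "low",
                          "availability": "low"},
    "budget execution":  {"confidentiality": "low", "integrity": "moderate",
                          "availability": "moderate"},
}
print(categorize(system))
# {'confidentiality': 'moderate', 'integrity': 'moderate', 'availability': 'moderate'}
```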

In addition to security categorization, the first step in the RMF also captures descriptive information about the system that is reflected in the system security plan and used in subsequent activities. Functional and technical information about a system may appear in a variety of different documents and SDLC artifacts depending on the methodology being followed, including business cases, functional specifications, system description documents, system design documents, or concepts of operations. Regardless of the terminology used, such documents typically specify the purpose and objectives for the system, the organization deploying it, its intended users, organizational roles and responsibilities, and its operational characteristics. System owners and information security officers use system description documentation throughout the RMF process, both to summarize key information in the system description and purpose section of the system security plan [13] and as a point of reference to validate that selected and implemented security controls satisfy the security objectives for the system and its intended use. System description documentation is especially important for completing the information system description task in step 1 of the RMF, in which system owners document identifying information and operational details about the system, define the system boundary, and formally establish the ownership and other key roles and responsibilities for the system and the information it contains [14]. Documentation produced in the initiation phase of the SDLC also typically provides identifying information about the system used to register it in appropriate system inventories. In many cases, systems will already have unique system or investment identifiers assigned prior to initiating the RMF, but during step 1 system owners need to make sure that the system is accurately reflected in security program documentation such as the FISMA inventory of systems each agency reports to OMB.

Note

Special Publication 800-53 Revision 3 defines a security concept of operations (CONOPS) as an optional control enhancement to the system security plan (the artifact is proposed as an optional control within the planning family in the draft Special Publication 800-53 Revision 4). This document describes how the organization will operate the system from a security perspective, including the system purpose, its architecture, the schedule for authorizing the system, and the security categorization determined for the system [15]. The security CONOPS explicitly focuses on security, although it may include much of the same descriptive information found in a system’s overall concept of operations.


URL: //www.sciencedirect.com/science/article/pii/B9781597496414000060

Metadata and information management

Margot Note, in Managing Image Collections, 2011

Metadata

Metadata is structured data about data, information that facilitates the management and use of other information. The function of metadata is to provide users with a standardized means for intellectual access to holdings. Metadata standards for digital information “can assist by facilitating the transfer of information between hardware and software platforms as technologies evolve.… Resources which are encoded using open standards have a greater chance of remaining accessible after an extended period than resources encoded with proprietary standards” (Preserving Access to Digital Information 2001). However, “it is not enough to use some metadata standard. A metadata standard appropriate to the materials in hand and the intended end-users must be selected” (Baca 2003, 54).

Metadata can identify the name of the work, who created it, who reformatted it, and other descriptive information. It can also provide unique identification of and links to organizations, files, or databases that have more extensive descriptive metadata about the image. Deegan and Tanner (2002) state that metadata is a critical component:

of digital resource development and use, and is needed at all stages in the creation and management of the resource. Any creator of digital objects should take as much care in the creation of the metadata as they do in the creation of the data itself—time and effort expended at the creation stage recording quality metadata is likely to save users much grief, and to result in a well-formed digital object that will survive for the long term. Well-formed metadata is the most efficient and effective tool for managing and finding objects in the complex information spaces with which libraries are now dealing. (114)

Unfortunately, no system has yet been widely adopted for tracking the digitization activities of libraries, archives, and museums. The prudent course for information professionals is to understand the current challenges, emerging principles, and best practices before implementing any particular metadata solution. Baca (2003) notes:

Choosing the most appropriate metadata standard or standards for describing particular collections or materials is only the first step in building an effective, usable information resource. Unless the metadata elements or data structure are populated with the appropriate data values (terminology), the resource will be ineffectual and users will not be able to find what they are looking for, even if it is actually there. Again, there is no one-stop-shopping for the appropriate vocabulary tool for any given project. Rather, builders of information resources should select from the menu of vocabularies most appropriate for describing and providing access points to their particular collections. (52)

There are four types of metadata: administrative, descriptive, preservation, and technical.

Administrative metadata captures the context necessary to understand information resources. It documents:

the life cycle of [a] … resource, including data about ordering, acquisition, maintenance, licensing, rights, ownership and provenance. It is essential that the provenance (custodial history) of a digital image object is recorded from, where possible, the time of its creation through all successive changes in custody or ownership. Users and curators must be provided with a sound basis for confidence that a digital image is exactly what it is purported to be.… There should be a clear audit trail of all changes (Anderson et al. 2006, 74).

Box 5.2

Best practices for creating metadata

When determining which metadata schema to use, take into account the needs and search preferences of users and collaborators.

Use existing institutional indexes and finding aids as a basis for metadata.

Metadata creation by professional staff may take less time and require less quality control than training others, even for a small project.

Base the metadata schema on the existing published schema closest to the institution’s needs. The schema can be adapted by adding or omitting elements.

Use the same name and definition of each element as in the published schema, so as to avoid confusion.

A set of compatible cataloging rules should accompany the schema. For example, if the schema includes elements drawn from VRA Core (see Box 5.3), the corresponding rules from CCO (see Box 5.4) should be followed.

Box 5.3

Data structure standards

CDWA and CDWA Lite (Categories for the Description of Works of Art)

www.getty.edu/research/conducting_research/standards/cdwa/index.html

These rules “describe the content of art databases by articulating a conceptual framework for describing and accessing information about works of art, architecture, other material culture, groups and collections of works, and related images.”

DC (Dublin Core Metadata Element Set)

//dublincore.org/documents/dces/

DC is “a vocabulary of fifteen properties for use in resource description. The name ‘Dublin’ is due to its origin at a 1995 invitational workshop in Dublin, Ohio; ‘core’ because its elements are broad and generic, usable for describing a wide range of resources.”

EAC-CPF (Encoded Archival Context-Corporate Bodies, Persons, and Families)

//eac.staatsbibliothek-berlin.de/

EAC-CPF “primarily addresses the description of individuals, families and corporate bodies that create, preserve, use and are responsible for and/or associated with records in a variety of ways.”

EAD (Encoded Archival Description)

www.loc.gov/ead

EAD was developed to “investigate the desirability and feasibility of developing a nonproprietary encoding standard for machine-readable finding aids such as inventories, registers, indexes, and other documents created by archives, libraries, museums, and manuscript repositories to support the use of their holdings.”

IPTC Photo Metadata Standard (International Press Telecommunications Council)

www.iptc.org/std/photometadata/specification/IPTC-PhotoMetadata(200907)_1.pdf

“IPTC Photo Metadata provides data about photographs and the values can be processed by software.”

MADS (Metadata Authority Description Schema)

www.loc.gov/standards/mads

“MADS is a MARC21-compatible XML format for the type of data carried in records in the MARC Authorities format.… Consistency with MODS [see below] was a goal as much as possible.”

MARC21 (Machine-Readable Cataloging)

www.loc.gov/marc/

“MARC formats are standards for the representation and communication of bibliographic and related information in machine-readable form.”

METS (Metadata Encoding & Transmission Standard)

www.loc.gov/standards/mets/

METS is a “standard for encoding descriptive, administrative, and structural metadata regarding objects within a digital library.”

MODS (Metadata Object Description Schema)

www.loc.gov/standards/mods/

MODS is a “schema for a bibliographic element set that may be used for a variety of purposes, and particularly for library applications.”

VRA Core 4.0

www.vraweb.org/projects/vracore4/

VRA Core 4.0 is a “data standard for the cultural heritage community that was developed by the Visual Resources Association’s Data Standards Committee. It consists of a metadata element set (units of information such as title, location, date, etc.), as well as an initial blueprint for how those elements can be hierarchically structured. The element set provides a categorical organization for the description of works of visual culture as well as the images that document them.”

Box 5.4

Data content standards

AACR2 (Anglo-American Cataloguing Rules, Second Edition)

www.aacr2.org/

These rules are “designed for use in the construction of catalogues and other lists in general libraries of all sizes. The rules cover the description of, and the provision of access points for, all library materials commonly collected at the present time.”

CCO (Cataloging Cultural Objects)

www.vrafoundation.org/ccoweb/

“CCO covers many types of cultural objects, including architecture, archaeological sites and artifacts, and some functional objects from the realm of material culture, but its primary emphasis is on art, including but not limited to paintings, sculptures, prints, manuscripts, photographs, and other visual media.”

DACS (Describing Archives: A Content Standard)

www.archivists.org/governance/standards/dacs.asp

These rules are used “to create any type or level of description of archival and manuscript materials including catalog records and finding aids.”

Graphic Materials

www.loc.gov/rr/print/gm/graphmat.html

“For groups of pictures as well as individual items, the guidelines cover transcribing and devising titles; stating creators, producers, and dates; expressing quantities, media, and dimensions; and writing subject, user advisory, and other kinds of notes.”

ISAAR (CPF) (International Standard Archival Authority Record for Corporate Bodies, Persons, and Families)

www.icacds.org.uk/eng/ISAAR(CPF)2ed.pdf

“This standard provides guidance for preparing archival authority records which provide descriptions of entities (corporate bodies, persons and families) associated with the creation and maintenance of archives.”

ISAD(G) (International Standard Archival Description)

www.ica.org/en/node/30000

“This standard provides general guidance for the preparation of archival descriptions. It is to be used in conjunction with existing national standards or as the basis for the development of national standards.”

RAD (Rules for Archival Description)

www.cdncouncilarchives.ca/archdesrules.html

“These rules aim to provide a consistent and common foundation for the description of archival material based on traditional archival principles. The rules can be applied to the description of archival fonds, series, collections, and discrete items.”

Granularity is important. Subdivide elements into the smallest sub-elements needed. Elements can always be merged later.

Set up editorial and quality control procedures to ensure that the catalog entries conform to the rules.

Test the schema and the rules thoroughly before it is too late to change them. Discover if the metadata displays correctly on the institution’s website and if users are satisfied with the catalog entries.

Plan where to store the metadata: embedded within the image file, in a separate metadata database, or both.

Be prepared for change as time passes, and design systems accordingly. For example, more metadata elements or controlled vocabularies may need to be added as the collection expands.

Descriptive metadata attempts to capture the intellectual attributes of the information resource, enabling users to locate, distinguish, and select suitable images on the basis of their subjects.

Preservation metadata is the information about an information resource used to protect it from deterioration or destruction.

Technical metadata assures that the information content of a digital file can be resurrected even if traditional viewing applications associated with that file are no longer available.

Metadata can be embedded in digital images or stored separately. Embedding metadata within the image it describes ensures that the metadata will not be lost, obviates problems of linking between data and metadata, and helps to ensure that the metadata and image will be updated together. Storing metadata separately can simplify the management of the metadata itself and facilitate search and retrieval. Metadata is commonly stored in a database system and linked to the images it describes.
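
A minimal sketch of the “store separately” option (file names and element names are illustrative, loosely modeled on Dublin Core): write a sidecar record next to the image and link the two by file name.

```python
import json
from pathlib import Path

def write_sidecar(image_path: str, record: dict) -> Path:
    """Store descriptive metadata in a JSON file alongside the image."""
    sidecar = Path(image_path).with_suffix(".json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar

record = {
    "title": "Main Street, looking east",
    "creator": "Unknown photographer",
    "date": "1911",
    "subject": ["streets", "storefronts"],
    "identifier": "img_000123",
}
print(write_sidecar("img_000123.tif", record))   # img_000123.json
```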

Creating and recording metadata is one of the major costs of image digitization. Although pictorial images can be digitized without cataloging, a digital image collection cannot be created and delivered without metadata. Providing sufficient metadata in a timely, efficient manner for an abundance of digital resources can create a bottleneck in a digital project’s workflow. Terras (2008) notes:

Creating and maintaining metadata about objects[,] and in particular digital information objects, is obviously time-consuming and costly, and a tension exists between the two metadata functions of discovery aid and resource description: metadata creators have to provide enough information to be useful, but cannot afford to be exhaustive. (166)

Sutton (2008) seconds this:

The biggest challenge [is] balancing the ideal scenario of comprehensive description with the more practical scenario of “good enough” description. The major factors influencing this equation were the limited resources available for digitization in terms of staff, time, and funding. (29)

Puglia (1999) notes that cataloging and indexing can account for nearly a third of the overall cost of a project. Arms (1996) estimates that it would take the cataloging staff of the University of California, Berkeley libraries 400 years to catalog their collection of 3.5 million images. Lagoze and Payette (2000) add that these costs present:

considerable challenges to the economics of traditional library cataloging, which creates metadata records characterized by great precision, detail, and professional intervention. Some estimates figure each original library catalog record at $50–70. This high price is impractical in the context of the growth of networked resources and less expensive alternatives are needed for many of these resources. (85)

Metadata creation requires both organizational and subject expertise in order to describe images effectively. Organizational expertise refers to the ability to apply the correct structure, syntax, and use of metadata elements, while subject expertise refers to the ability to generate meaningful descriptions of the images for users. High-quality metadata utilizing both types of expertise is integral to the effective searching, retrieval, use, and preservation of digital resources.


URL: //www.sciencedirect.com/science/article/pii/B9781843345992500052

SQL Server 2000 Overview and Migration Strategies

In Designing SQL Server 2000 Databases, 2001

Extended Properties

Many smaller database applications such as Microsoft Access have been using extended properties for some time. Extended properties allow the developer to define additional descriptive information for database objects and object components such as table columns or indexes. This additional information can be used by client applications or developers to understand the object and to specify characteristics such as the display format for a character column. SQL Server provides three system stored procedures, sp_addextendedproperty, sp_dropextendedproperty, and sp_updateextendedproperty, and one system function, fn_listextendedproperty, to manage and read extended properties.
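
For instance, here is a hedged sketch of adding and listing an extended property from Python over ODBC; the connection string, table, and column names are placeholders, and the underlying T-SQL calls are the stored procedures and function named above:

```python
import pyodbc

conn = pyodbc.connect("DSN=sql2000;UID=sa;PWD=secret")
cur = conn.cursor()

# Attach a display-format hint to the dbo.Customers.Phone column.
cur.execute("""
    EXEC sp_addextendedproperty
        @name = 'DisplayFormat', @value = '(###) ###-####',
        @level0type = 'user',   @level0name = 'dbo',
        @level1type = 'table',  @level1name = 'Customers',
        @level2type = 'column', @level2name = 'Phone'
""")
conn.commit()

# List extended properties defined on columns of dbo.Customers.
cur.execute("""
    SELECT * FROM ::fn_listextendedproperty(
        NULL, 'user', 'dbo', 'table', 'Customers', 'column', NULL)
""")
for row in cur.fetchall():
    print(row)
```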


URL: //www.sciencedirect.com/science/article/pii/B978192899419050004X

The challenge of enhancing traditional services

Luisa Alvite, Leticia Barrionuevo, in Libraries for Users, 2011

Enrichment of the bibliographical records

This is one of the recurring demands for the improvement of OPACs, as has been said before. Peis (2000) points out that the enrichment of monograph records in catalogues with thematic information began to increase in the 1970s. The project for the thematic enrichment of bibliographical records known as SAP (Subject Access Project), directed by Atherton, is generally considered the starting point for others such as SAP-Sweden or the Mercury project at Carnegie Mellon University.

In 1992 the Library of Congress created the BEAT (Bibliographic Enrichment Advisory Team)13 to research and undertake initiatives aimed at broadening the usefulness of bibliographical records. Its first actions included several projects focused on enriching bibliographical records by incorporating tables of contents.

Peis (2000) himself noticed that one of the most important and urgent possibilities of improvement of the online catalogue was the inclusion of descriptive information, which could come from the table of contents. Pappas and Herendeen (2000) highlight three great advantages derived from the incorporation of this type of information into the catalogue:

1. Tables of contents help users to determine the relevance of titles with respect to their information needs.

2. The contents page greatly improves the effectiveness of information retrieval.

3. They serve as a complement to subject cataloguing.

Duchemin (2005) identifies the following enrichment elements: cover, table of contents, back cover, abstract, author bibliography, and even excerpts from the work. More recently, Powell (2008) has described the integration of digitised materials into the University of Michigan OPAC through its collaborative partnership with Google. For their part, Byrum and Williamson (2006) report results confirming that enriched records are used far more heavily than standard records; it has even been shown that the same record may increase its use by 45 per cent when it includes the table of contents.

It is worth mentioning the Catalog Enrichment Initiative,14 conceived in 2004 with the aim of encouraging the coordinated creation of tables of contents data for older publications; it is a collaborative project directed by Robert Kieft – Haverford College – in which institutions such as OCLC, RLG, and the Library of Congress take part. The recent report from OCLC (Calhoun et al., 2009) emphasises the first-order importance that users give to data enrichment. Users wish to reduce the gap between the description in the bibliographical record and the item itself. We agree that libraries need to work together to share the costs involved in catalogue data enrichment.

The positive influence of online libraries means that users and the library community now regard the incorporation of these elements into OPACs as a fundamental added value; moreover, ILSs now offer the technological solutions that make this possible and allow the incorporation of enriched items. At present, the preferred approaches to record enrichment are purchasing the content from the institution's usual ILS vendor and using APIs that draw on publishers and/or service providers such as Google or Amazon.
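
As an illustration of the API route, here is a hedged sketch that fetches enrichment data by ISBN from the public Google Books volumes endpoint; the ISBN and the fields a given catalogue would keep are assumptions:

```python
import json
import urllib.request

def fetch_enrichment(isbn: str) -> dict:
    """Return an abstract and cover link for a bibliographic record, if found."""
    url = f"https://www.googleapis.com/books/v1/volumes?q=isbn:{isbn}"
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    if not data.get("items"):
        return {}
    info = data["items"][0]["volumeInfo"]
    return {
        "abstract": info.get("description"),
        "cover_url": info.get("imageLinks", {}).get("thumbnail"),
    }

print(fetch_enrichment("9780123820204"))
```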


URL: //www.sciencedirect.com/science/article/pii/B9781843345954500042

What is the name for a control on a form or in a report that displays the data from a field in a table or query?

Bound control. A control whose source of data is a field in a table or query is called a bound control. You use bound controls to display values that come from fields in your database. The values can be text, dates, numbers, Yes/No values, pictures, or graphs.

Which field property allows you to display a name in forms and reports that is different than the field name?

The Caption property. Setting a field's Caption property displays that text, instead of the field name, as the label in forms and reports.

What is the bottom part of the screen that displays the field names for a query?

The bottom part of Query Design view is called the design grid. It contains a table that lists all of the fields included in the query.

Which view is used to make design changes to a form while the form is displaying data?

In Layout view, you can make design changes to the form while it is displaying data. When you use the Multiple Items tool, the form that Access creates resembles a datasheet. The data is arranged in rows and columns, and you see more than one record at a time.
