Glossary of Terms

Accreditation
A formal recognition that an organization (Labelling Authority) is competent to carry out specific tasks or specific types of tests. [Glossary of Quality Assurance Terms]
Accredited Authority
(Labelling Authority)
An accredited authority (Labelling Authority) is an organization which has been evaluated and given approval to perform a specified measurement or task, in our case, certification of Health-related Web Resources. [Glossary of Quality Assurance Terms]
Assessment (of Data Quality)
The process of testing and evaluation against a set of predefined quality criteria; the evaluation of data, comprising data validation/verification and data quality assessment, to establish whether the data meet the quality criteria needed for a specific application. [Glossary of Quality Assurance Terms]
Certified (Labelled) Web Resources
Web resources that have one or more of their property values established by a technically valid procedure and are accompanied by, or traceable to, a certificate (label) issued by a certifying body (Labelling Authority).
Data Quality
Data quality refers to the totality of features and characteristics of data that bear on their ability to satisfy a given purpose; the sum of the degrees of excellence for factors related to data. [Glossary of Quality Assurance Terms]
Data Quality Indicators
Qualitative descriptors that are used to interpret the degree of acceptability or utility of data to the user. [Glossary of Quality Assurance Terms]
Information Extraction
The automatic assignment of meaning to elementary textual entities and, possibly, to more complex structured objects.
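The assignment of meaning to elementary textual entities can be sketched in miniature with pattern matching. The entity types, patterns, and sample text below are invented for illustration; real information extraction systems use far richer linguistic models.

```python
import re

# Toy information extraction: each regular expression assigns a type
# (a "meaning") to the pieces of text it matches. Patterns and sample
# text are illustrative assumptions, not part of any real system.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "YEAR": re.compile(r"\b(19|20)\d{2}\b"),
}

def extract_entities(text):
    """Return (entity_type, matched_text) pairs found in `text`."""
    entities = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            entities.append((label, match.group(0)))
    return entities

print(extract_entities("Contact info@example.org; certified in 2007."))
```

Each matched span is tagged with its entity type, which is the elementary form of the term's "assignment of meaning".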
Metadata
Data that describes other data, either online or offline; information that characterizes the who, what, where, and how of data collection. Often, the term refers to special tagged fields in a document that provide information about the document to search engines and other computer applications. Web pages often include metadata in the form of meta tags; description and keywords meta tags are commonly used to describe a Web page's content, and most search engines use this data when adding pages to their search index.
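The description and keywords meta tags mentioned above can be read with nothing more than the Python standard library's HTML parser. The page below is a made-up example; this is a minimal sketch, not a production metadata harvester.

```python
from html.parser import HTMLParser

# Minimal sketch: collect name/content pairs from <meta> tags,
# the same fields search engines consult when indexing a page.
class MetaTagParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            name, content = attrs.get("name"), attrs.get("content")
            if name and content is not None:
                self.meta[name.lower()] = content

# Invented example page with the two commonly used meta tags.
page = """<html><head>
<meta name="description" content="A certified health resource.">
<meta name="keywords" content="health, quality, labelling">
</head><body>...</body></html>"""

parser = MetaTagParser()
parser.feed(page)
print(parser.meta)
```

The parser ignores everything except `<meta>` start tags, so the body of the page never matters; only the tagged metadata fields are kept.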
Resource Description Framework (RDF)
Resource Description Framework (RDF) is a family of World Wide Web Consortium (W3C) specifications originally designed as a metadata data model, but which has come to be used as a general method of modelling information through a variety of syntax formats. The RDF metadata model is based upon the idea of making statements about Web resources in the form of subject-predicate-object expressions, called triples in RDF terminology. The subject denotes the resource; the predicate denotes traits or aspects of the resource and expresses a relationship between the subject and the object.
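The subject-predicate-object idea can be illustrated with plain Python tuples. The URIs and property names below are illustrative assumptions, not a real RDF vocabulary, and a real system would use an RDF library and serialization format rather than in-memory tuples.

```python
# Sketch of the RDF triple model: each statement is a
# (subject, predicate, object) tuple about a Web resource.
# All identifiers below are invented for illustration.
triples = [
    ("http://example.org/page1", "dc:title", "Diabetes Care Guide"),
    ("http://example.org/page1", "dc:creator", "Example Health Org"),
    ("http://example.org/page1", "label:certifiedBy", "LabellingAuthorityX"),
]

def objects(subject, predicate):
    """All objects asserted for a given subject and predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# The subject denotes the resource; the predicate names a trait of it;
# the object carries the trait's value.
print(objects("http://example.org/page1", "label:certifiedBy"))
```

Because every statement has the same three-part shape, a set of triples can be queried uniformly, which is what makes the model useful as a general method of describing resources.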
Semantic Web
The Semantic Web is an evolving extension of the World Wide Web in which the semantics of information and services on the web are defined, making it possible for the web to understand and satisfy requests by people and machines for web content. It derives from W3C director Tim Berners-Lee's vision of the Web as a universal medium for data, information, and knowledge exchange. At its core, the Semantic Web comprises a set of design principles, collaborative working groups, and a variety of enabling technologies. Some elements of the Semantic Web are expressed as prospective future possibilities that have yet to be implemented or realized; others are expressed in formal specifications.
Web Mining
Web mining is the application of data mining techniques to discover patterns from the Web. According to the analysis target, web mining can be divided into three types: Web usage mining, Web content mining, and Web structure mining. Web usage mining applies data mining to analyse and discover interesting patterns in users' usage data on the web. Web content mining is the process of discovering useful information from the content of a web page; web content may consist of text, image, audio, or video data. Web structure mining is the process of using graph theory to analyse the node and connection structure of a web site.
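The structure mining variant, treating a site as a directed graph of links, can be sketched in a few lines. The page names and links below are an invented miniature site; counting in-links is only the crudest structural signal, far simpler than real link-analysis algorithms.

```python
from collections import defaultdict

# Toy Web structure mining: model a small, invented site as a
# directed graph of (source, target) links and count in-links,
# a crude indicator of which pages the site's structure emphasizes.
links = [
    ("index.html", "about.html"),
    ("index.html", "articles.html"),
    ("about.html", "index.html"),
    ("articles.html", "index.html"),
    ("articles.html", "about.html"),
]

in_degree = defaultdict(int)
for source, target in links:
    in_degree[target] += 1

# Pages sorted by number of incoming links.
ranking = sorted(in_degree.items(), key=lambda item: item[1], reverse=True)
print(ranking)
```

The same graph representation underlies more sophisticated structure mining, where node and edge patterns are analysed with full graph-theoretic machinery rather than a simple degree count.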