ADO.NET is a data access technology from the Microsoft .NET Framework that provides communication between relational and non-relational systems through a common set of components. It is part of the base class library included with the .NET Framework, and programmers use it to access data and data services from a database.
- column-level security
Column-Level Security (CLS) enables access control to database table columns based on the user’s execution context or their group membership.
- columnar database
A columnar database stores data by columns rather than by rows, which makes it more suitable for analytical query processing, and thus for data warehouses.
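The difference between the two layouts can be sketched in plain Python (the structures below are illustrative and not tied to any particular database engine):

```python
# The same three records in a row-oriented and a column-oriented layout.
rows = [
    ("alice", 30, 1200.0),
    ("bob",   25,  800.0),
    ("carol", 41, 1500.0),
]

# Columnar layout: one contiguous sequence per column.
columns = {
    "name":  ["alice", "bob", "carol"],
    "age":   [30, 25, 41],
    "spend": [1200.0, 800.0, 1500.0],
}

# An analytical query such as SUM(spend) only has to scan one column,
# which is why the columnar layout suits data-warehouse workloads.
total_spend = sum(columns["spend"])
print(total_spend)  # 3500.0
```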
- data anonymization
Data anonymization has been defined as a “process by which personal data is irreversibly altered in such a way that a data subject can no longer be identified directly or indirectly, either by the data controller alone or in collaboration with any other party.” Data anonymization enables the transfer of information across a boundary, such as between two departments within an agency or between two agencies, while reducing the risk of unintended disclosure; in certain environments it can be performed in a manner that still enables evaluation and analytics post-anonymization.
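A toy sketch of the irreversibility property: direct identifiers are dropped and quasi-identifiers are generalized, with no mapping kept, so the original record cannot be reconstructed from the output (the fields and age bands here are invented for illustration):

```python
def anonymize(record: dict) -> dict:
    """Drop direct identifiers and generalize quasi-identifiers.

    No lookup table is retained, so the step is irreversible."""
    decade = (record["age"] // 10) * 10
    return {
        "age_band": f"{decade}-{decade + 9}",  # 34 -> "30-39"
        "city": record["city"],
    }

record = {"name": "Jane Doe", "age": 34, "city": "Warsaw"}
print(anonymize(record))  # {'age_band': '30-39', 'city': 'Warsaw'}
```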
- data connection
- data source connection
A data source connection is an object that holds all the information needed to authenticate to a data source, extract metadata, and query data, such as the server name or URL and user credentials or tokens.
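A minimal sketch of such an object; the field names are illustrative, not Querona’s actual configuration schema:

```python
from dataclasses import dataclass, field

@dataclass
class DataSourceConnection:
    """Holds what is needed to authenticate and query a data source."""
    name: str
    server_url: str
    username: str
    token: str = field(repr=False)  # excluded from repr so it is not logged

conn = DataSourceConnection(
    name="sales_db",
    server_url="jdbc:postgresql://db:5432/sales",
    username="svc_user",
    token="s3cret",
)
assert "s3cret" not in repr(conn)  # the secret stays out of diagnostics
```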
- data consumer
- data consumers
A person or system that consumes data from Querona.
- data masking
Data masking or data obfuscation is the process of hiding original data with modified content (characters or other data). The main reason for applying masking to a data field is to protect data classified as personally identifiable, personally sensitive, or commercially sensitive; however, the data must remain usable and look real and consistent.
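A simple masking function shows the trade-off: the value is hidden, yet it keeps its length and last digits, so it still looks real and remains usable for display (a generic sketch, not a specific product feature):

```python
def mask_card_number(card_number: str) -> str:
    """Replace all but the last four digits, preserving the value's shape."""
    return "*" * (len(card_number) - 4) + card_number[-4:]

print(mask_card_number("4111111111111111"))  # ************1111
```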
- data provider
A data provider is a library, either built-in or custom, that supplies the implementation required for low-level connectivity to a data source.
- data pseudonymization
Data pseudonymization is a data management and de-identification procedure by which personally identifiable information fields within a data record are replaced by one or more artificial identifiers, or pseudonyms. A single pseudonym for each replaced field or collection of replaced fields makes the data record less identifiable while remaining suitable for data analysis and data processing. Pseudonymization can be one way to comply with the European Union’s General Data Protection Regulation demands for secure data storage of personal information. Pseudonymized data can be restored to its original state with the addition of information which then allows individuals to be re-identified, while anonymized data can never be restored to its original state. The pseudonym allows tracking back of data to its origins, which distinguishes pseudonymization from data anonymization, where all person-related data that could allow backtracking has been purged.
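The reversibility that distinguishes pseudonymization from anonymization can be sketched as follows; the pseudonym scheme (random tokens plus a separately held mapping) is illustrative, not a prescribed method:

```python
import uuid

# The mapping is stored separately from the data; whoever holds it
# can re-identify subjects, which is what makes the step reversible.
pseudonyms: dict[str, str] = {}

def pseudonymize(name: str) -> str:
    """Return a stable artificial identifier for a subject."""
    if name not in pseudonyms:
        pseudonyms[name] = "subject-" + uuid.uuid4().hex[:8]
    return pseudonyms[name]

token = pseudonymize("Jane Doe")
assert pseudonymize("Jane Doe") == token  # same subject, same pseudonym

# Re-identification is possible only with the mapping table.
reverse = {v: k for k, v in pseudonyms.items()}
assert reverse[token] == "Jane Doe"
```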
- data source
A data source is any of the following types of sources for digitized data: a database, a file, a data stream, and others.
- data virtualization
Data virtualization is a technology that provides an abstraction layer hiding most of the technical aspects of how and where data resides, is stored, and is processed, allowing data to be accessed regardless of the interfaces and technologies required at the data sources.
- data warehouse
A data warehouse (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis, and is considered a core component of business intelligence. DWs are central repositories of integrated data from one or more disparate sources. They store current and historical data in a single place, where it is used for creating analytical reports for workers throughout the enterprise. The data stored in the warehouse is collected from operational systems (such as marketing or sales). The data may pass through an operational data store and may require data cleansing to ensure data quality before it is used in the DW for reporting.
- DBMS
Database Management System.
- DDL
Data Definition Language. Those statements in SQL that define, as opposed to manipulate, data. For example, CREATE TABLE, CREATE INDEX, GRANT, and REVOKE.
- DML
Data Manipulation Language. Those statements in SQL that manipulate, as opposed to define, data. For example, INSERT, UPDATE, DELETE, and SELECT.
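The split between the two statement families can be demonstrated with Python’s built-in sqlite3 module (SQLite’s dialect is used here purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# DDL: defines structure.
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE INDEX ix_customer_name ON customer (name)")

# DML: manipulates the data held in that structure.
conn.execute("INSERT INTO customer (name) VALUES (?)", ("Alice",))
conn.execute("UPDATE customer SET name = ? WHERE name = ?", ("Alicia", "Alice"))
rows = conn.execute("SELECT name FROM customer").fetchall()
print(rows)  # [('Alicia',)]
```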
- dynamic data masking
Dynamic data masking (DDM) is real-time masking of data. DDM changes the data stream so that the data consumer does not get access to the sensitive data, while no physical changes to the original data take place.
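A minimal sketch of the idea, with masking applied on the read path only (the privileged/non-privileged distinction and field names are invented for illustration):

```python
SENSITIVE = {"email"}

def read_row(row: dict, user_is_privileged: bool) -> dict:
    """Mask sensitive fields in the result stream; stored data is untouched."""
    if user_is_privileged:
        return dict(row)
    return {k: ("***" if k in SENSITIVE else v) for k, v in row.items()}

stored = {"id": 1, "email": "jane@example.com"}
print(read_row(stored, user_is_privileged=False))  # {'id': 1, 'email': '***'}
assert stored["email"] == "jane@example.com"       # original row is unchanged
```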
- integration virtual database
A virtual database that supports data caching using one of the supported data processing systems - usually a DBMS or cloud service.
- JDBC
Java Database Connectivity (JDBC) is an application programming interface (API) for the programming language Java, which defines how a client may access a database.
- metadata
A set of data that describes and gives information about other data.
There are three main types of metadata according to NISO definitions:
Descriptive metadata describes a resource for purposes such as discovery and identification. It can include elements such as title, abstract, author, and keywords.
Structural metadata indicates how compound objects are put together, for example, how pages are ordered to form chapters.
Administrative metadata provides information to help manage a resource, such as when and how it was created, file type and other technical information, and who can access it.
- ODBC
Open Database Connectivity (ODBC) is a standard application programming interface (API) for accessing database management systems (DBMS).
- OLE DB
OLE DB (Object Linking and Embedding, Database, sometimes written as OLEDB or OLE-DB), an API designed by Microsoft, allows accessing data from a variety of sources in a uniform manner. The API provides a set of interfaces implemented using the Component Object Model (COM); it is otherwise unrelated to OLE.
- pass-through virtual database
A virtual database that uses a direct connection to a data source. All queries against this VDB type are translated into the data access technology supported by the source, for example an SQL dialect or an API call, and executed on the data source without any caching.
- row-level security
Row-Level Security (RLS) enables you to use group membership or execution context to control access to rows in a database table. In Querona, RLS supports filter predicates that silently filter the rows available to read operations.
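The effect of a filter predicate can be sketched in Python (in practice the predicate is evaluated by the database engine, not application code; the region-based rule below is a made-up example):

```python
def rls_filter(rows: list[dict], user: dict) -> list[dict]:
    """Silently restrict read operations to rows the user's context permits."""
    return [r for r in rows if r["region"] == user["region"]]

orders = [
    {"id": 1, "region": "EU"},
    {"id": 2, "region": "US"},
]
visible = rls_filter(orders, {"name": "jane", "region": "EU"})
print(visible)  # [{'id': 1, 'region': 'EU'}]
```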
- Apache Spark
Apache Spark is a unified analytics engine for large-scale data processing.
- static data masking
With Static Data Masking, the user configures how masking operates for each column selected inside the database. Static Data Masking will then replace data in the database copy with new, masked data generated according to that configuration. Original data cannot be unmasked from the masked copy.
- Transact-SQL
Transact-SQL (T-SQL) is Microsoft’s and Sybase’s proprietary extension to the SQL (Structured Query Language) used to interact with relational databases. T-SQL expands on the SQL standard to include procedural programming, local variables, various support functions for string processing, date processing, mathematics, etc. and changes to the DELETE and UPDATE statements.
Transact-SQL is central to using Microsoft SQL Server. All applications that communicate with an instance of SQL Server do so by sending Transact-SQL statements to the server, regardless of the user interface of the application.
In Querona, T-SQL also refers to the specific SQL dialect behind SQL Server and its emulation built into Querona.
- virtual database
A virtual database is a type of database management system that serves as a container to transparently view and query several other databases through a uniform API that culls from multiple sources as if they were a single entity. These databases are connected via a computer network and then accessed as if they are from a single database. A virtual database’s goal is to be able to view and access data in a unified way without needing to copy and duplicate it in several databases or manually combine the results from many queries. Each of the combined databases in the system is completely self-sustaining and functional, and is able to function on its own without depending on other existing databases. When an application requests to access a virtual database, the system figures out which of the databases contain the data being requested by the user and passes on the request to that database. The most important and challenging part of building a virtual database is building a universal data model, which serves as the map or guide to every source of data within the company.
- virtual table
A virtual table is an object in metadata that holds information about a remote object: usually a table, a view, or the tabular result of a query to a source system.