Data Abstraction Layer

Use Data Abstraction to Hide Complexity and Simplify Information Access

IT complexity reigns as organizations struggle with data spread across various technology and application silos. And big data and the cloud are only making this worse.

Each data silo has its own access mechanisms, syntax, security, etc., and few are structured properly for business users.

Is there a way to simplify information access in a complex landscape?

Data abstraction bridges the gap between business needs and the original form of source data. This best-practice implementation of data virtualization provides the following benefits:

  • Simplified information access – Bridge business and IT terminology and technology so both can succeed.
  • Common business view of the data – Gain agility, efficiency and reuse across applications via an enterprise information model or “Canonical” model.
  • More accurate data – Consistently apply data quality and validation rules across all data sources.
  • More secure data – Consistently apply data security rules across all data sources and consumers via a unified security framework.
  • End-to-end control – Use a data virtualization platform to consistently manage data access and delivery across multiple sources and consumers.
  • Business and IT change insulation – Insulate consuming applications from changes in the source and vice versa. Business users and application developers work with a more stable view of the data, while IT can change and relocate physical data sources without impacting information users.

TIBCO's Data Abstraction Reference Architecture

Data abstraction using TIBCO Data Virtualization

  • Application Layer – The “Application Layer” maps the Business Layer into the format in which each data consumer (user or application) wants to receive the data. This might mean formatting into XML for web services, or creating views with alias names that match how consumers are used to seeing their data.
  • Business Layer – The “Business Layer” is predicated on the idea that the business has a standard, or canonical, way of describing key business entities such as customers and products. In the financial industry, for example, information is often accessed according to financial instruments and issuers, among many other entities. Typically, a data modeler works with business experts and data providers to define a set of “logical” or “canonical” views that represent these business entities. These views are reusable components that can and should be used by multiple consumers via the Application Layer.
  • Physical Layer – In the “Physical Layer,” data sources are integrated into the abstraction. Value-added tasks such as name aliasing, value formatting, data type casting, derived columns, and light data quality checks are also defined here. Metadata used at this layer is typically derived from the physical sources.
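
The three layers above can be sketched in code. The following is a minimal, illustrative Python example, not a TIBCO API: all names (raw_crm_rows, Customer, customers_as_json, etc.) are hypothetical, and each function stands in for a view that a data virtualization platform would define declaratively.

```python
import json
from dataclasses import dataclass

# --- Physical Layer: raw source rows with source-specific names and types ---
raw_crm_rows = [
    {"CUST_ID": "007", "CUST_NM": "acme corp", "REV": "1200.50"},
    {"CUST_ID": "042", "CUST_NM": "globex", "REV": "980.00"},
]

def physical_customers():
    """Alias column names, format values, and cast data types —
    the value-added tasks defined in the Physical Layer."""
    for row in raw_crm_rows:
        yield {
            "customer_id": row["CUST_ID"].lstrip("0"),  # name aliasing + formatting
            "customer_name": row["CUST_NM"].title(),    # value formatting
            "revenue": float(row["REV"]),               # data type casting
        }

# --- Business Layer: a canonical "Customer" entity, reusable by any consumer ---
@dataclass
class Customer:
    customer_id: str
    customer_name: str
    revenue: float

def business_customers():
    """The canonical view of the Customer entity."""
    return [Customer(**row) for row in physical_customers()]

# --- Application Layer: map the canonical view into each consumer's format ---
def customers_as_json():
    """One consumer wants JSON (e.g., for a web service); another
    consumer could reuse business_customers() with a different mapping."""
    return json.dumps([vars(c) for c in business_customers()])

print(customers_as_json())
```

Note how the Application Layer depends only on the canonical Business Layer view: if the CRM source renamed its columns, only physical_customers() would change, illustrating the change insulation described earlier.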