Given data’s high demand and complex landscape, data architecture has become increasingly important for organizations that are embarking on any data-driven project, especially embedded analytics. There are many ways to approach your analytics data architecture. You may skip some approaches altogether, or use two simultaneously.
To determine which data architecture solution is best for you, consider the pros and cons of these seven most common approaches:
Transactional Databases
The starting point for many application development teams is the ubiquitous transactional database, which runs most production systems. Transactional databases are row stores, with each record/row keeping relevant information together. They are known for very fast read/write updates and high data integrity. As soon as analytics data hits the transactional database, it is available for analytics. The main downside of transactional databases is structure: They’re not designed for optimal analytics queries, which creates a multitude of performance issues.
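As a minimal sketch of the row-store idea, consider SQLite (itself a transactional row store). The `orders` schema and sample rows below are illustrative assumptions, not from any particular product:

```python
import sqlite3

# A sketch of a transactional (row-store) table using SQLite.
# The "orders" schema and sample data are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer TEXT NOT NULL,
        amount REAL NOT NULL,
        created_at TEXT NOT NULL
    )
""")

# Row-oriented storage keeps each record's fields together, so
# single-record inserts and updates are fast and transactional.
with conn:  # opens a transaction; commits on success, rolls back on error
    conn.execute("INSERT INTO orders VALUES (1, 'acme', 120.0, '2024-01-05')")
    conn.execute("INSERT INTO orders VALUES (2, 'globex', 75.5, '2024-01-06')")

# The same table is immediately available for analytics queries,
# even though it is not structured for them.
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 195.5
```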
Bottom Line: Using transactional databases for embedded analytics makes sense if you already have them in place, but you will eventually run into limitations and need workarounds.
Views or Stored Procedures
When developers start noticing problems with their transactional systems, they often opt to create views or stored procedures. A view presents the result set of a stored query as if it were a table. While views only present the data, stored procedures allow you to execute SQL statements on the data. This approach simplifies the SQL needed to run analytics and allows users to filter the information they want to see. However, views or stored procedures typically make performance worse, because the underlying query still runs in full every time it is accessed.
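A minimal sketch of the trade-off, using SQLite and a hypothetical `orders` table: the view simplifies the SQL that analytics users write, but the aggregation behind it still executes on every read.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("acme", 120.0), ("acme", 30.0), ("globex", 75.5)])

# A view gives analytics users a simpler "table" to query...
conn.execute("""
    CREATE VIEW revenue_by_customer AS
    SELECT customer, SUM(amount) AS revenue
    FROM orders
    GROUP BY customer
""")

# ...but the underlying aggregation still runs on every read,
# which is why views alone rarely improve performance.
rows = conn.execute(
    "SELECT revenue FROM revenue_by_customer WHERE customer = 'acme'"
).fetchall()
print(rows)  # [(150.0,)]
```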
Bottom Line: When it comes to embedded analytics, views or stored procedures risk creating lags and affecting your application’s response time.
Aggregate Tables or Materialized Views
Application development teams may opt to create aggregate tables or materialized views as another workaround to using views or stored procedures. With an aggregate table, you can create a summary table of the data you need by running a “Group By” SQL query. A materialized view goes a step further by physically storing the query results in a table. Aggregate tables or materialized views improve query performance because you don’t need to aggregate the data for every query. But, the downside is that you need to figure out when and how to update the tables, as well as how to distinguish between updates versus new transactions.
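Both halves of that trade-off fit in a short sketch (SQLite, hypothetical `orders` table): the “Group By” summary table makes reads cheap, but it silently goes stale as new transactions arrive.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("acme", 120.0), ("acme", 30.0), ("globex", 75.5)])

# Pre-aggregate with a GROUP BY into a summary table; analytics
# queries then read the small table instead of scanning orders.
conn.execute("""
    CREATE TABLE revenue_summary AS
    SELECT customer, SUM(amount) AS revenue
    FROM orders
    GROUP BY customer
""")

revenue = conn.execute(
    "SELECT revenue FROM revenue_summary WHERE customer = 'acme'"
).fetchone()[0]
print(revenue)  # 150.0

# The trade-off: new transactions do not appear until the summary
# is rebuilt (or incrementally updated) by some scheduled process.
conn.execute("INSERT INTO orders VALUES ('acme', 50.0)")
stale = conn.execute(
    "SELECT revenue FROM revenue_summary WHERE customer = 'acme'"
).fetchone()[0]
print(stale)  # still 150.0 -- the aggregate is now out of date
```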
Bottom Line: Pre-aggregated tables and materialized views will help with performance, but you do need to stay organized and put strict processes in place to keep the aggregates up to date.
Replication of Transactional Databases
Replication offloads analytics queries from the production database to a replicated copy of the database. It requires copying and storing data in more than one site or node, so all of the analytics users share the same information. Because many databases have built-in replication facilities, this is easier to implement than other analytics data architecture approaches—and replication removes analytical load from the production database. However, the main issue with replication is the lag between a new transaction hitting the database and that data being available in the replicated table.
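The lag problem can be simulated with two SQLite databases standing in for a primary and a replica (real systems use the database’s built-in replication facilities; this is only a toy illustration):

```python
import sqlite3

# Toy illustration of replication lag: two SQLite databases simulate
# a primary and a replica; the "sync" is a one-shot backup copy.
primary = sqlite3.connect(":memory:")
replica = sqlite3.connect(":memory:")

primary.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
primary.execute("INSERT INTO orders VALUES (1, 120.0)")
primary.commit()

# Periodic sync: copy the primary's current state to the replica.
primary.backup(replica)

# A transaction lands on the primary *after* the last sync...
primary.execute("INSERT INTO orders VALUES (2, 75.5)")
primary.commit()

primary_count = primary.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
replica_count = replica.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(primary_count, replica_count)  # 2 1 -- the replica lags until the next sync
```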
Bottom Line: Replicating the production database also means replicating the complexity of queries in your embedded analytics solution.
Caching
With caching, you can preprocess complex and slow-running queries so the resulting data is easier to access when the user requests the information. The cached location could be in memory, another table in the database, or a file-based system where the resulting data is stored temporarily. Caching can help with performance where queries are repeated and is relatively easy to set up in most environments. But, if you have multiple data sources, ensuring consistency and scheduling of cache refreshes can be complex.
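A minimal in-memory sketch, assuming a hypothetical `orders` table and a cache keyed by the SQL text: repeated queries are served from the cache, but the cache does not see new transactions until it is refreshed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("acme", 120.0), ("globex", 75.5)])

# An in-memory cache keyed by the SQL text: repeated queries are
# served from the cache instead of hitting the database again.
cache = {}

def cached_query(sql):
    if sql not in cache:
        cache[sql] = conn.execute(sql).fetchall()
    return cache[sql]

sql = "SELECT SUM(amount) FROM orders"
first = cached_query(sql)   # runs against the database
second = cached_query(sql)  # served from the cache
print(first)  # [(195.5,)]

# The latency trade-off: the cache does not reflect new transactions
# until it is invalidated or refreshed.
conn.execute("INSERT INTO orders VALUES ('acme', 50.0)")
stale = cached_query(sql)
print(stale)  # still [(195.5,)]
```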
Bottom Line: Caching can be a quick fix for improving embedded analytics performance, but the complexity of multiple sources and data latency issues may lead to limitations over time.
Data Warehouses or Data Marts
For a more sophisticated data architecture, application development teams may turn to data warehouses or data marts. Data warehouses are central repositories of integrated data from one or more disparate sources, while data marts contain a subset of a data warehouse designed for a specific purpose (e.g., isolating data related to a particular line of business). They both allow you to organize your data in a way that simplifies query complexity and significantly improves query performance. However, designing a data structure for particular use cases can be complex, especially if you’re not familiar with the schema and ETL tools involved.
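A toy sketch of the warehouse-to-mart relationship, with an assumed `warehouse_sales` table and a single ETL step (real implementations use dedicated ETL tools and dimensional schemas):

```python
import sqlite3

# Toy sketch: a central warehouse table feeding a line-of-business
# data mart. Schema and ETL step are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE warehouse_sales (
        line_of_business TEXT, region TEXT, amount REAL
    )
""")
conn.executemany(
    "INSERT INTO warehouse_sales VALUES (?, ?, ?)",
    [("retail", "us", 120.0), ("retail", "eu", 30.0),
     ("wholesale", "us", 500.0)],
)

# The ETL step: extract the retail subset into a mart shaped for
# that team's queries (here, pre-aggregated by region).
conn.execute("""
    CREATE TABLE retail_mart AS
    SELECT region, SUM(amount) AS revenue
    FROM warehouse_sales
    WHERE line_of_business = 'retail'
    GROUP BY region
""")

rows = conn.execute(
    "SELECT region, revenue FROM retail_mart ORDER BY region"
).fetchall()
print(rows)  # [('eu', 30.0), ('us', 120.0)]
```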
Bottom Line: Data warehouses and data marts are designed for faster analytics and response times, but implementation will take more time and be more complex.
Modern Analytics Databases
Modern analytics databases are typically columnar structures or in-memory structures. In columnar structures, data is stored at a granular column level in the form of many files, making it faster to query. In in-memory structures, the data is loaded into memory, which makes reading/writing dramatically faster than a disk-based structure. Modern analytics databases provide improved performance on data load as well as optimal query performance, which is important if you have large volumes of data. But, a big downside is the significant learning curve associated with switching to a modern analytics database. Also, unlike transactional databases, analytics databases perform updates and deletions poorly.
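The row-versus-column distinction can be illustrated in plain Python (real columnar databases add compression, vectorized execution, and on-disk layouts on top of this idea):

```python
# Toy illustration of column-oriented storage: each column is stored
# contiguously, so an aggregate over one column touches only that
# column's data instead of every full record.

# Row store: each record keeps all of its fields together.
row_store = [
    {"customer": "acme", "amount": 120.0, "region": "us"},
    {"customer": "globex", "amount": 75.5, "region": "eu"},
    {"customer": "acme", "amount": 50.0, "region": "us"},
]

# Column store: one sequence per column.
col_store = {
    "customer": ["acme", "globex", "acme"],
    "amount": [120.0, 75.5, 50.0],
    "region": ["us", "eu", "us"],
}

# Aggregating "amount" from the row store reads every full record...
row_total = sum(rec["amount"] for rec in row_store)

# ...while the column store scans a single contiguous column.
col_total = sum(col_store["amount"])
print(row_total, col_total)  # 245.5 245.5
```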
Bottom Line: The modern analytics database is optimal for faster queries and dealing with large volumes of data, but it requires specialized skills and can be costly to implement.
Get a more detailed look at these approaches in our whitepaper: Toward a Modern Data Architecture for Embedded Analytics >