Fast and Easy Inline Data Wrangling
Inline data wrangling lets business users make adjustments: mash up columns and rows from various data sources; unpivot with one click; change data type, category, and column name; dynamically group columns from visualizations; modify sort order; split smart columns; and cleanse data by replacing wrong or missing values. Full API support lets you extend these capabilities, such as by adding or changing join types, to spark deeper insights.
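Two of the operations above, unpivoting and changing a join type, can be sketched outside Spotfire with pandas. This is an illustrative analogy, not Spotfire's API; the table names and data are invented for the example:

```python
import pandas as pd

# Toy wide-format sales table (hypothetical data for illustration).
wide = pd.DataFrame({
    "region": ["East", "West"],
    "2023": [100, 150],
    "2024": [120, 170],
})

# "Unpivot with one click" corresponds to a melt: wide -> long format.
long = wide.melt(id_vars="region", var_name="year", value_name="sales")

# Changing a join type (inner -> left) surfaces unmatched rows instead
# of silently dropping them -- the kind of insight a join-type change can spark.
targets = pd.DataFrame({"region": ["East", "North"], "target": [110, 90]})
inner = long.merge(targets, on="region", how="inner")  # drops West entirely
left = long.merge(targets, on="region", how="left")    # keeps West with NaN target
```

With the inner join, West disappears from the result; the left join keeps it and flags the missing target, which is often the more revealing view.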
Self-documenting Data Canvas
As you wrangle data, Spotfire automatically builds a data pipeline on the source view data canvas that documents every change made. The data model is fully traceable and auditable: information about data sources, connections, operations, and transformations is recorded automatically. And the data model can be exported for reuse.
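The idea behind a self-documenting pipeline can be sketched in a few lines: record each operation as it runs, so the lineage can later be exported for audit or reuse. This is a hypothetical minimal sketch of the concept, not Spotfire's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Pipeline:
    """Records a description of every step applied, alongside running it,
    so the full lineage can be exported later."""
    steps: list = field(default_factory=list)

    def apply(self, data, description, fn):
        self.steps.append(description)  # document the operation as it happens
        return fn(data)

    def export_lineage(self):
        # Numbered, human-readable record of all transformations.
        return [f"{i + 1}. {s}" for i, s in enumerate(self.steps)]

p = Pipeline()
rows = [{"price": "10"}, {"price": None}, {"price": "12"}]
rows = p.apply(rows, "replace missing price with 0",
               lambda rs: [{**r, "price": r["price"] or "0"} for r in rs])
rows = p.apply(rows, "cast price to int",
               lambda rs: [{**r, "price": int(r["price"])} for r in rs])
```

After the two steps run, `p.export_lineage()` returns the ordered record of what was done, which is the essence of traceability: the answer to "how did this column get these values?" is always on file.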
Turbocharged Recommendations Engine
Automatic recommendations for data structure and relationships, both in-memory and in-database, save time. For example, recommendations might suggest adding table rows before data is even loaded; linking tables to configure instant shared marking; choosing between in-database and in-memory data loading; or categorizing in-database columns to make navigating big data a breeze.
Broad, Automated Data Access
With native connectors, easily connect to and blend data from relational and NoSQL databases; OLAP; Apache Drill, Hadoop, and Spark SQL; Impala; SAP HANA; and many others—and to cloud sources like Amazon Redshift, Databricks, RDS, Microsoft Azure SQL Database, Google Analytics, and Salesforce. Or easily build custom connectors. Natural Language Query lets you search for any data or connector.
Automation & Scalability
Spotfire Automation Services automates jobs like data loading, wrangling, transformation, and data and PDF export. Extensive APIs can trigger automations based on events. Spotfire Scheduled Updates lets you preload analyses into memory so teams can access them fast. Schedule uploads, especially of large files, during off hours for efficient use of resources. Files load using smart memory sharing, serving tens of thousands of users with only a fraction of the memory otherwise required.
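The memory-sharing idea, loading an analysis once and serving every subsequent user from the same in-memory copy, can be sketched with a simple cache. This is an illustrative analogy using Python's standard library, not Spotfire's actual mechanism; the function and analysis name are invented:

```python
from functools import lru_cache

LOADS = {"count": 0}  # tracks how many expensive loads actually happen

@lru_cache(maxsize=None)
def load_analysis(name):
    """Hypothetical expensive load; cached so it runs once per analysis."""
    LOADS["count"] += 1
    return {"name": name, "data": list(range(1000))}

# Ten thousand user requests, but the analysis is loaded into memory only once;
# every request after the first is served from the shared cached object.
for _user in range(10_000):
    load_analysis("sales-dashboard")
```

The same principle scales the real system: memory cost tracks the number of distinct analyses loaded, not the number of users viewing them.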