A smart data warehouse anyone can use

Be self-reliant

Easily upload data yourself, with no need for advanced tech skills, ETL, or engineering resources. You don’t even need to know SQL to get started; in fact, almost 75% of our clients are non-technical. Panoply does all the heavy lifting for you thanks to our data warehouse automation engine.

“A great way to manage your data.”

- Natasha S.

All your data together at last

Panoply automates the ingestion of diverse data sources and makes tables clear, configurable, and immediately queryable. It also seamlessly connects you to any BI tool you need so you can start visualizing, analyzing, and sharing data insights in just minutes.

“As the lead analyst on a small startup team where data is imperative to our success, Panoply has been incredible."

- Justin M.

No more hanging queries

We’ve accelerated and optimized the querying process with machine learning. Panoply saves you precious time and resources, and makes sure your data is up-to-date and ready to be shared.

“Easy, fast and reliable”

- Vitali M.

All your data sources aggregated, easily queryable, & connected to any BI tool in just minutes

Data upload in minutes

Data analytics at the speed of your business

Features Include:

You won’t find an easier, more useful data warehouse dashboard than ours

Panoply’s dashboard gives you full transparency into your data pipeline. You can monitor all your data sources, saved queries, and connected BI tools in one place, and easily schedule data uploads right from the dashboard to make sure you’re never using stale data.

Get tables that are clean, clear and easy to query

Panoply automates schema modeling so you don’t have to spend endless hours reindexing your data.

Instantly upload data from any cloud source, database or file

Whether your data is structured or not, whatever the file type (CSV, TSV, XML, JSON), and regardless of the cloud service APIs or marketing tools you use, such as Google Analytics, you can pull all your data into one streamlined, smart data warehouse.

Panoply connects your data to any BI tool

Seamlessly connect to any business intelligence tool you need to visualize or analyze your data in just minutes, so you can immediately export and share valuable insights across your organization.

Learn more about Panoply’s integrations

Panoply runs on SQL

Transform data with simple SQL in Panoply’s built-in SQL workbench and get your analytics done quickly and efficiently. Don’t know SQL? No problem, you can just connect your data to a BI visualization tool.
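
If you do know SQL, the same transformations you would type into the workbench can also be run programmatically. The sketch below is a minimal illustration, assuming the warehouse accepts a standard Postgres/Redshift driver; the host, credentials, and table names are hypothetical placeholders, not a prescribed setup.

    # Minimal sketch: running a SQL transformation against the warehouse.
    # Host, credentials, and table names are hypothetical placeholders.
    import psycopg2

    conn = psycopg2.connect(
        host="db.panoply.io",   # assumed Postgres/Redshift-compatible endpoint
        port=5439,
        dbname="analytics",
        user="analyst",
        password="********",
    )

    transform_sql = """
        CREATE TABLE monthly_revenue AS
        SELECT DATE_TRUNC('month', order_date) AS month,
               SUM(amount)                     AS revenue
        FROM   orders
        GROUP  BY 1;
    """

    with conn, conn.cursor() as cur:
        cur.execute(transform_sql)   # the same SQL you would run in the workbench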

Optimize your queries

Panoply learns as you use it, saving and caching your queries and optimizing them to save you time across all your data analytics reporting tasks. It also adapts server configurations for greater scalability and concurrency, which means your queries return results fast even when others are running queries at the same time.

Data engineering in a box

Panoply is like having a personal data engineer and database administrator on hand 24/7. It does the heavy lifting for you by automatically backing up and sorting all your uploaded data neatly and clearly into tables and node clusters, so you and your data engineering team don’t spend hours on data transformation, reindexing, and schema modeling.

All the help you'll ever need

Panoply offers FAQs, how-to documentation, in-product chat, a friendly community of Panoply users, and even our very own Data Architects, whom you can consult whenever you need help.

Secure & Reliable

Secure, stable, and reliable infrastructure

As with any AWS cloud-hosted solution, responsibility for security is shared between Panoply and AWS.

Panoply uses Amazon’s flexible and secure cloud infrastructure to store data logically across multiple AWS regions and availability zones.
Panoply is SOC 2 certified. In addition, it rests on Amazon’s AWS, which complies with standards such as HIPAA, PCI DSS, SOC 2, SOC 3, FISMA, and FedRAMP.
Every process runs in its own isolated, secure container.

Layered security and anomaly protocols

We take multiple precautions to ensure that all data is safe and secure.

Panoply offers 2-step verification.
Panoply’s permissions enable customers to restrict access to specific tables or views, providing a hierarchical security protocol.
We use a network that is segmented using AWS security groups, VPCs, ACLs, and additional custom measures.

Audits

Security measures are reviewed regularly to make sure Panoply is always using the latest security protocols.

Panoply’s security and R&D teams meet on a monthly basis to review risk assessments, system security configurations, and policies, as well as conduct internal security audits.
Panoply conducts annual gray-box penetration tests on all of its code.
Our network security is monitored, logged, and analyzed 24/7.

Panoply under the hood

Smart cloud data warehouse automation manages the data pipeline to save you time and resources.

Automated ETL

Our smart cloud data warehouse automation is an ETL-less process: it uses automated ELT (extract-load-transform), together with natural language processing, to save you endless hours of coding and modeling for data ingestion, integration, and transformation.

Learn more about ELT

Data integration

Panoply can ingest data from over 100 data integrations, including databases, APIs, and file systems, and its SDK lets you push data from any current or future data source into Redshift, all through Panoply.

  • Automates data source connections
  • Seamlessly connects to third-party SaaS APIs
  • Easily connects to the most common storage services
  • Lets you build your own data source connections with Panoply’s SDK

Data schema modeling

The adaptive schema changes in real time along with the data. You don’t need any prior knowledge, and changes are seamless. Just load data in; everything else is automatic. (A conceptual sketch of the type discovery idea follows the list below.)

  • Data types are automatically discovered and a schema is generated based on the initial data structure
  • Likely relationships between tables are automatically detected and used to model a relational schema
  • Slowly-changing tables are automatically generated
  • Aggregations are automatically generated
  • The table history feature allows you to store data uploaded from API data sources so you can compare and analyze data from different time periods
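
As a conceptual illustration of the first point above, the sketch below guesses column types from an initial batch of records. It shows the general idea only, not Panoply’s actual modeling logic; the function names and sample data are hypothetical.

    # Conceptual sketch: discover column types from an initial batch of records.
    # Illustration only, not Panoply's actual modeling logic.
    from datetime import datetime

    def infer_type(value):
        """Guess a warehouse column type from a single sample value."""
        if isinstance(value, bool):
            return "BOOLEAN"
        if isinstance(value, int):
            return "BIGINT"
        if isinstance(value, float):
            return "DOUBLE PRECISION"
        try:
            datetime.fromisoformat(str(value))
            return "TIMESTAMP"
        except ValueError:
            return "VARCHAR"

    def infer_schema(records):
        """Build a column -> type mapping from the first records seen."""
        schema = {}
        for record in records:
            for column, value in record.items():
                schema.setdefault(column, infer_type(value))
        return schema

    sample = [{"id": 1, "email": "a@example.com", "signed_up": "2024-01-15T09:30:00"}]
    print(infer_schema(sample))
    # {'id': 'BIGINT', 'email': 'VARCHAR', 'signed_up': 'TIMESTAMP'}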

Automated data transformation

Panoply applies common transformations automatically, including the identification of structured and semi-structured data formats like CSV, TSV, JSON, XML, and many log formats, and it immediately flattens nested structures like lists and objects. Structured data can also be transformed into different tables with a one-to-many relationship (see the sketch after the list below).

  • Common data formats are identified automatically, and parsed accordingly
  • Compressed files are discovered and extracted
  • Nested structures can be either flattened or placed into a sub-table
  • Enhancement modules are automatically applied to the data
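
The sketch below illustrates the flattening idea from the list above: nested objects become prefixed columns, while lists are split out into a sub-table with a one-to-many relationship. It is a conceptual illustration only, not Panoply’s internal transformation code; the record and field names are made up.

    # Conceptual sketch: flatten nested objects into prefixed columns and
    # split list values into a sub-table. Illustration only, not Panoply internals.
    order = {
        "id": 1001,
        "customer": {"name": "Ada", "country": "UK"},
        "items": [
            {"sku": "A-1", "qty": 2},
            {"sku": "B-7", "qty": 1},
        ],
    }

    def flatten(record, prefix=""):
        """Return a flat row plus any list fields destined for sub-tables."""
        row, sub_tables = {}, {}
        for key, value in record.items():
            column = f"{prefix}{key}"
            if isinstance(value, dict):
                child_row, child_subs = flatten(value, f"{column}_")
                row.update(child_row)
                sub_tables.update(child_subs)
            elif isinstance(value, list):
                sub_tables[column] = value      # becomes its own one-to-many table
            else:
                row[column] = value
        return row, sub_tables

    row, subs = flatten(order)
    print(row)   # {'id': 1001, 'customer_name': 'Ada', 'customer_country': 'UK'}
    print(subs)  # {'items': [{'sku': 'A-1', 'qty': 2}, {'sku': 'B-7', 'qty': 1}]}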

Query performance optimization

Panoply automatically reindexes the schema and performs a series of optimizations on the queries and data structure to improve runtime based on your usage.

Remodeling through continuous optimization

Panoply offers several tools for automated maintenance of your analytical infrastructure, but also provides transparency and full control over all processes, enabling you to apply changes manually when needed.

  • Reindexing happens automatically whenever the system detects changes in query patterns
  • Panoply automatically identifies columns used for joins, and re-distributes the data across nodes to improve data locality and join performance

View materialization and query caching

Panoply uses statistical algorithms to inspect query and dashboard runtimes over selected data, constantly looking for ways to optimize query performance. For example, popular queries and views are automatically cached and materialized (the sketch below illustrates the caching idea).
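
As a rough illustration of result caching, the sketch below keys cached results on the normalized query text, so a repeated query is answered without hitting the warehouse again. This shows the general idea only, not Panoply’s optimizer; the class and its methods are hypothetical.

    # Conceptual sketch: cache query results keyed by normalized SQL text.
    # Illustration of the general idea only, not Panoply's optimizer.
    import hashlib

    class QueryCache:
        def __init__(self):
            self._results = {}
            self._hits = {}

        def _key(self, sql):
            normalized = " ".join(sql.lower().split())
            return hashlib.sha256(normalized.encode()).hexdigest()

        def run(self, sql, execute):
            """Return a cached result when the same query has been seen before."""
            key = self._key(sql)
            if key not in self._results:
                self._results[key] = execute(sql)   # hit the warehouse only once
            self._hits[key] = self._hits.get(key, 0) + 1
            return self._results[key]

    cache = QueryCache()
    total = cache.run("SELECT COUNT(*) FROM orders", execute=lambda q: 42)
    total = cache.run("select count(*)  from orders", execute=lambda q: 42)  # cache hit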

Concurrency

Multi-cluster replication allows the compartmentalization of storage and compute. The number of available clusters scales with the number of users and the intensity of the workload, supporting hundreds of parallel queries that are load balanced between clusters.

Connect to BI tools

Panoply exposes a standard JDBC/ODBC endpoint with ANSI-SQL support to allow instant, seamless connection to any visualization or business intelligence tool, such as Chartio, Looker, and Tableau.
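
Because the endpoint speaks standard SQL over common drivers, any client that understands them can connect the same way a BI tool does. The sketch below is a minimal illustration using pandas and SQLAlchemy, assuming the Redshift-backed endpoint accepts a standard Postgres driver; the connection URL, credentials, and table name are hypothetical placeholders.

    # Minimal sketch: read warehouse data into pandas over the SQL endpoint,
    # the same path a BI tool would use. Connection details are placeholders.
    import pandas as pd
    from sqlalchemy import create_engine

    # Assumes a Postgres-compatible driver (psycopg2) is installed.
    engine = create_engine("postgresql://analyst:********@db.panoply.io:5439/analytics")

    df = pd.read_sql("SELECT month, revenue FROM monthly_revenue ORDER BY month", engine)
    print(df.head())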

Storage optimization

Panoply runs periodic background processes that organize your data and optimize storage based on your usage. Full and incremental backups are automated so you don't have to schedule them.

Scaling up and down the cluster

Panoply automatically scales up and down based on data volume. Scaling happens behind the scenes, keeping your clusters available for both reads and writes so ingestion can continue uninterrupted. When scaling is complete, the old and new clusters are swapped instantly.

Automated maintenance

Panoply automates the vacuuming and compression of tables to help improve Redshift database performance, continuously analyzes tables to better serve the queries it receives, and keeps your metadata fresh.

Check out Panoply’s transparent pricing for any type of business.

Need more info? Speak with an expert Data Architect by setting up a personalized demo.