Panoply.io Press Kit
From Raw Data to Analysis in Under 10 Minutes
Press Contact: Laura Beck at email@example.com
At Panoply.io, our mission is to bring time to value as close to zero as possible. We believe in taking the load off the IT teams and data engineers that have long been mired in time-intensive tasks such as schema building, complex modeling, and performance tuning.
Panoply.io utilizes machine learning and natural language processing (NLP) to automate standard data management activities – saving thousands of lines of code and countless hours of debugging and research.
With Panoply.io, what once required teams of engineers can now be accomplished with a click. Complex tasks, from data mining to security and backup, are all automated, cutting down development time by as much as 80% – and freeing up data scientists to explore data and drive business value.
Our easy-to-use platform gives small and medium businesses the tools to harness Big Data and get analytics quickly, so they can make faster and better business decisions.
Our story begins with an idea: in the Big Data era, if you free up your data engineers and scientists, you create value for your customers and your business. It’s simple, right?
Assets for download:
Panoply.io’s CEO and co-founder, Yaniv Leven, brings to the table a strong background in statistical analysis and data modeling and an unwavering commitment to cold logic and entrepreneurship. Prior to establishing Panoply.io, Yaniv was responsible for analytics and data management technology first as COO at Win and then as CFO at Mytopia. With his B.Sc. in statistics and M.Sc. in finance from Tel Aviv University, Yaniv is dedicated to solving labor-and-time-intensive issues for engineers – and, above all things, he views himself as an analyst.
Panoply.io’s CTO and co-founder, Roi Avinoam, is an experienced software technologist who is passionate about all things analytics. Prior to establishing Panoply.io, Roi held the position of CTO at Win and Mytopia, where he managed systems and development teams processing over 200 million data points on a daily basis – always with the clear and focused goal of enhancing acquisition, retention, and monetization for multi-million-dollar businesses. Prior to joining Mytopia, Roi worked for Metacafe (acquired by Collective Digital Services), where he led the semantic database and infrastructure team.
End-to-End Data Management for Analytics
Panoply.io’s ETL-less data integration means data is automatically aggregated as it streams in, allowing analysis in seconds – regardless of scale, and without data configuration, schema, or modeling.
The platform scans through data and discovers the underlying schema and metadata that best describe it – including all columns, data types, and foreign keys. It constructs this schema based on the data or alters the existing schema, eliminating the need to explicitly design database tables and columns.
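The schema discovery described above can be sketched in miniature. This is an illustrative toy, not Panoply.io's actual implementation – the `infer_type` and `infer_schema` helpers and the sample data are invented here:

```python
from datetime import datetime

def infer_type(values):
    """Guess the narrowest SQL type that fits every sample value."""
    def fits(cast):
        try:
            for v in values:
                cast(v)
            return True
        except (ValueError, TypeError):
            return False

    if fits(int):
        return "BIGINT"
    if fits(float):
        return "DOUBLE PRECISION"
    if fits(lambda v: datetime.fromisoformat(v)):
        return "TIMESTAMP"
    return "VARCHAR"

def infer_schema(rows):
    """Map each column name to an inferred SQL type from sample rows."""
    columns = {}
    for row in rows:
        for key, value in row.items():
            columns.setdefault(key, []).append(value)
    return {name: infer_type(vals) for name, vals in columns.items()}

rows = [
    {"id": "1", "email": "a@example.com", "signed_up": "2016-05-01T10:00:00"},
    {"id": "2", "email": "b@example.com", "signed_up": "2016-05-02T12:30:00"},
]
print(infer_schema(rows))
# {'id': 'BIGINT', 'email': 'VARCHAR', 'signed_up': 'TIMESTAMP'}
```

A production system would of course also detect foreign keys and reconcile new data against an existing schema; this sketch only covers the type-inference step.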
Real Time Transformations
Panoply.io applies common transformations automatically – identifying data formats like CSV, TSV, JSON, XML, and many log formats – and flattens nested structures like lists and objects into separate tables with a one-to-many relationship.
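This kind of flattening can be sketched as follows – a toy illustration only, assuming list items are objects; the `flatten` helper and the generated `_id`/`parent_id` columns are invented here, not Panoply.io's actual code:

```python
import json

def flatten(record, table, tables, parent_id=None):
    """Split one nested record into flat rows, one list per table.

    Nested objects are flattened into the parent row; nested lists
    become child tables linked back to the parent row by a generated
    foreign key (a one-to-many relationship).
    """
    rows = tables.setdefault(table, [])
    row = {"_id": len(rows) + 1}
    if parent_id is not None:
        row["parent_id"] = parent_id  # foreign key to the parent row
    for key, value in record.items():
        if isinstance(value, list):
            for item in value:
                flatten(item, f"{table}_{key}", tables, parent_id=row["_id"])
        elif isinstance(value, dict):
            for k, v in value.items():
                row[f"{key}_{k}"] = v  # flatten nested object in place
        else:
            row[key] = value
    rows.append(row)
    return tables

event = json.loads("""{
  "user": {"id": 7, "name": "Ada"},
  "action": "purchase",
  "items": [{"sku": "A1", "qty": 2}, {"sku": "B4", "qty": 1}]
}""")
tables = flatten(event, "events", {})
print(tables["events_items"])
# [{'_id': 1, 'parent_id': 1, 'sku': 'A1', 'qty': 2},
#  {'_id': 2, 'parent_id': 1, 'sku': 'B4', 'qty': 1}]
```

The single nested event becomes two flat tables – `events` and `events_items` – that map directly onto relational storage.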
3-Tier Storage Architecture
Panoply.io has a 3-tier stack of storage systems abstracted away behind a single JDBC endpoint: AWS S3 at the backend – a massively scalable storage engine for semi-structured data; Redshift for most of the data, especially structured and frequently accessed tables and rows; and Elasticsearch for fast access and searches through data and aggregations – including indexing and storage of common queries.
Streamlined Data Utilization
Panoply.io delivers a set of pre-integrated, cloud-based analysis tools through a Data Apps framework, which is easily extendable to a wide range of tools and platforms.
Simplified User Management
Panoply.io streamlines the management of users and permissions, avoiding the cumbersome SQL configuration generally required to manage users, passwords, grants, and denies.
Enhanced Privacy & Security
Built on top of AWS, Panoply.io uses the latest security patches and encryption capabilities provided by the underlying platform, including permission controls, TLS, and hardware-accelerated RSA encryption. Panoply.io also offers an extra layer of security built to enhance data protection and privacy, including columnar encryption, two-step verification, anomaly detection, and expiring-account handling.
Use-Case Optimization: Analyzes queries and data – identifying the best configuration for each use case, adjusting it over time, and managing indexes, sort keys, dist keys, data types, vacuuming, and partitioning.
Query Optimization: Identifies queries that do not follow best practices – such as those that include nested loops or implicit casting – and rewrites them as equivalent queries requiring a fraction of the runtime or resources.
Server Optimization: Optimizes server configurations over time based on query patterns and by learning which server setup works best. The platform switches server types seamlessly and measures the resulting performance.
Updates, Upserts, and Deletions: Supports standard SQL update and upsert operations out of the box – without worrying about vacuuming or rebuilding – unlike many analytical databases.
Semi-Structured Data Parsing: Supports semi-structured text values like nested JSON, user-agent strings, some standard log formats, CSV, and serialized Ruby objects, parsing these objects and normalizing them into a relational database design.
Nested Structures: Handles nested structures automatically, flattening them into several tables with a one-to-many relationship. The result is a ready-to-use relational database design for all current and future datasets.
Columnar Storage: Provides seamless data storage and management in multi-tiered, columnar storage based on Amazon Redshift, Elasticsearch, and Hadoop or AWS S3.
Data Tracking and Alerts: Simplifies keeping track of vast amounts of data – by identifying patterns, providing notification of anomalies, and generating alerts when the results of arbitrary SQL queries exceed predefined thresholds.
Backup and Recovery: Automatically backs up data changes to redundant S3 storage, optionally saved in two different availability zones across continents – enabling full recovery to any point in time.
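The threshold-alert idea from "Data Tracking and Alerts" above can be sketched as follows – a minimal illustration using SQLite as a stand-in for the warehouse; the `check_threshold` helper is invented here, not Panoply.io's actual API:

```python
import sqlite3

def check_threshold(conn, sql, threshold, alert):
    """Run an arbitrary scalar SQL query and fire an alert callback
    when the result exceeds the predefined threshold."""
    (value,) = conn.execute(sql).fetchone()
    if value is not None and value > threshold:
        alert(sql, value, threshold)
        return True
    return False

# Demo against an in-memory database (stand-in for the data warehouse).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE errors (occurred_at TEXT, code INTEGER)")
conn.executemany(
    "INSERT INTO errors VALUES (?, ?)",
    [("2016-05-01", 500), ("2016-05-01", 502), ("2016-05-01", 503)],
)

fired = check_threshold(
    conn,
    "SELECT COUNT(*) FROM errors WHERE code >= 500",
    threshold=2,
    alert=lambda sql, value, limit: print(f"ALERT: {value} > {limit} for {sql}"),
)
print(fired)  # True: 3 server errors exceed the threshold of 2
```

In practice such checks would run on a schedule against the warehouse, with the alert callback routed to email or a monitoring channel rather than stdout.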