Run Etleap Inside Your VPC

Today is a big day for the whole team here at Etleap! We’re announcing that we’re making Etleap VPC available on AWS Marketplace, which means customers can run Etleap inside their own AWS Virtual Private Cloud (VPC) with just a few clicks. This marks the beginning of an era where we’re not just responsible for the success of customers running pipelines on our hosted infrastructure, but also of those running Etleap pipelines on their own AWS infrastructure. This product launch has been in the making for many months, and I am very proud of what our team has accomplished!

So what is Etleap VPC and why did we build it? First I’ll make the case for the VPC SaaS model being the future of enterprise ETL. Then I’ll highlight some of the reasons why being SaaS-first has been tremendously beneficial to Etleap’s development as a company.

How did we get here?

Every organization that has a data warehouse or lake has to deal with ETL – after all, what good is a fancy data repository without up-to-date and clean data in it? Traditionally, ETL has been expensive and time-consuming. Learning complex ETL software, building Kimball modeling processes, and setting up compute clusters dedicated to ETL led to projects measured in months or years with 7-figure budgets. And that’s before the operational costs of handling changes and errors.

Enter the cloud. Data warehouses like Amazon Redshift and Snowflake have taken advantage of cloud computing primitives to deliver vastly superior user experiences, offering scalability and flexibility at a fraction of the cost of traditional data warehouses. Today, cloud-native ETL products tailored to these technologies and to modern data teams’ workflows are starting to eliminate the headaches of traditional ETL projects.

Advantages of Cloud-Native, Managed SaaS ETL

What makes cloud-native ETL different? Some products, like Etleap’s ETL solution, are easy to learn and let data engineers create ETL pipelines that are fully managed. This means that customers don’t need a dedicated ops team to operate the hardware that the ETL software runs on, or to manage pipelines and fix errors. Pipelines can be set up quickly from any data source, and transformations to make the data useful can be defined without coding.

This is great news for data teams, because they can avoid hiring engineers dedicated to ETL, and they can get up and running in days or weeks instead of months or years. Teams often see an order-of-magnitude gain in data team productivity as a result. At the same time, it means that a tremendous amount of trust is placed in the product’s data security and operation by the customer.

Towards the Virtual Private Cloud (VPC)

A question we have been asked many times over the years by our customers is whether we can operate inside their own Virtual Private Cloud (VPC). The security benefit to the customer is that data doesn’t flow outside their VPC on the way from their source to the warehouse or lake, and they have more direct control over infrastructure and data access policies. While hosted Etleap uses only S3 buckets owned by the customer for intermediate data storage, data passes through servers managed by Etleap for processing. For the most privacy-sensitive companies, this is a non-starter. In order to adopt the cloud, it is an absolute requirement that their data remains tightly controlled inside their VPC.

While a large segment of the market is happy to adopt, and even prefers, hosted ETL services, it is now clear to us that enterprise companies are often not, and probably never will be. We asked ourselves what this meant for our offering, and the answer we have come up with is that running Etleap inside one’s own VPC should “feel” just like using the hosted product; hiring ETL engineers should not be necessary and pipelines should “just work”. This is of course easier said than done – without our experienced ops team having direct access to operate infrastructure and troubleshoot pipeline issues, how can we offer the same experience?

Through working with early adopters of our VPC offering, we have hardened the infrastructure components to make them virtually management-free, and homed in on the operational metrics and de-identified logs that need to be shipped back to Etleap’s ops team so that we can offer proactive pipeline support and assist with reactive troubleshooting.

Setting up Etleap is designed to be fast and pain-free, and it was important to us to offer the same effortless experience to customers that want to run Etleap inside their own VPC. Using AWS CloudFormation, the setup of infrastructure components is fully automated, including an automatically scaling AWS EMR cluster that runs extractions and transformations. Customers enter their name and email address, press “play” on the CloudFormation template, and minutes later they can set up Etleap pipelines in their browser.

Benefits of Being SaaS-First

Creating a robust ETL system is all about handling edge cases. It’s relatively straightforward to build a system that ingests files from an SFTP server or replication logs from a SQL database into a warehouse. The complexity is in how you handle issues like errors in the data, schemas that change in unpredictable ways, or ingesting incrementally from an S3 bucket that contains tens of millions of non-alphabetically ordered files. Our focus is on providing intuitive solutions to all these issues, while at the same time making the ETL software a pleasure to use.

Etleap’s hosted multi-tenant deployment is and will continue to be the biggest Etleap deployment in terms of data scale and number of users. This is a great benefit to customers running Etleap inside their own VPCs because of the continuous improvements we make to the scalability, usability, and flexibility of the solution.

Being SaaS-first has enabled us to work very closely with our customers. From the beginning, we set the high bar that pipelines should “just work”, no matter the source, quantity of data, or transformation complexity. Over the last 7 years, we have been fortunate to experience our customers’ new challenges first-hand every day, and as a result have been able to build a robust system that meets their needs. Today, we are thrilled to provide the same pain-free and effortless experience for customers that want to run Etleap inside their own VPC.

Speeding Up Etleap Models at AXS with Amazon Redshift Materialized Views

This blog post was written in partnership with the Amazon Redshift team, and also posted on the AWS Big Data Blog.

The materialized views feature in Amazon Redshift is now generally available and has been benefiting customers and partners in preview since December 2019. One customer, AXS, is a leading ticketing, data, and marketing solutions provider for live entertainment venues in the US, UK, Europe, and Japan. Etleap, an Amazon Redshift partner, is an extract, transform, load, and transform (ETLT) service built for AWS. AXS uses Etleap to ingest data into Amazon Redshift from a variety of sources, including file servers, Amazon S3, relational databases, and applications. These ingestion pipelines parse, structure, and load data into Amazon Redshift tables with appropriate column types and sort and distribution keys.

Improving dashboard performance with Etleap models

To analyze data, AXS typically runs queries against large tables that originate from multiple sources. One of the ways that AXS uses Amazon Redshift is to power interactive dashboards. To achieve fast dashboard load times, AXS pre-computes partial answers to the queries the dashboards use. These partial answers are orders of magnitude smaller, in terms of number of rows, than the tables on which they are based. By querying Amazon Redshift tables that hold these pre-computed partial answers, dashboards load much faster than they would if they queried the base tables directly.

Etleap supports creating and managing such pre-computations through a feature called models. A model consists of a SELECT query and triggers for when it should be updated. An example of a trigger is a change to a base table, that is, a table used by the SELECT statement that defines the model. This way, the model can remain consistent with its base tables.

The following screenshot shows an Etleap model with two base table dependencies.

Etleap represents its models as tables in Amazon Redshift. To create the model table, Etleap wraps the SELECT statement in a CREATE TABLE AS (CTAS) query. When an update is triggered, for example due to base table inserts, updates, or deletes, Etleap recomputes the model table with the following statements:

CREATE TABLE model_temporary AS SELECT …;
DROP TABLE model;
ALTER TABLE model_temporary RENAME TO model;

Analyzing CTAS performance as data grows

AXS manages a large number of Etleap models. For one particular model, the CTAS query takes over 6 minutes, on average. This query performs an aggregation on a join of three different tables, including an event table that is constantly ingesting new data and contains over a billion rows. The following graph shows that the CTAS query time increases as the event table increases in number of rows.

There are two key problems with the query taking longer:

  • There’s a longer delay before the updated model is available to analysts
  • The model update consumes more Amazon Redshift cluster resources

To address this, AXS would have to resort to workarounds that are either inconvenient or costly, such as archiving older data from the event table or expanding the Amazon Redshift cluster to increase available resources.

Comparing CTAS to materialized views

Etleap decided to run an experiment to verify that Amazon Redshift’s materialized views feature is an improvement over the CTAS approach for this AXS model. First, they built the materialized view by wrapping the SELECT statement in a CREATE MATERIALIZED VIEW AS query. For updates, instead of recreating the materialized view every time that data in a base table changes, a REFRESH MATERIALIZED VIEW query is sufficient. The expectation was that using materialized views would be significantly faster than the CTAS-based procedure. The following graph compares query times of CTAS to materialized view refresh.
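
In SQL terms, the experiment replaced the three-statement swap shown earlier with a view that is created once and then refreshed in place. Here is a minimal sketch, using a hypothetical model name:

-- Created once, wrapping the model's SELECT query:
CREATE MATERIALIZED VIEW model AS SELECT …;

-- Run whenever an update is triggered, instead of CREATE/DROP/RENAME:
REFRESH MATERIALIZED VIEW model;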

Running REFRESH MATERIALIZED VIEW was 7.9 times faster than the CTAS approach—it took 49 seconds instead of 371 seconds on average at the current scale. Additionally, the update time was roughly proportional to the number of rows that were added to the base table since the last update, rather than the total size of the base table. In this use case, this number is 3.8 million, which corresponds to the approximate number of events ingested per day.

This is great news. The solution solves the previous problems because the delay caused by the model update stays constant as new data comes in, and so do the resources that Amazon Redshift consumes (assuming the growth of the base table is constant). In other words, using materialized views eliminates the need for workarounds, such as archiving or cluster expansion, as the dataset grows. It also simplifies the refresh procedure for model updates by reducing the number of SQL statements from three (CREATE, DROP, and RENAME) to one (REFRESH).

Achieving fast refresh performance with materialized views

Amazon Redshift can refresh a materialized view efficiently and incrementally. It keeps track of the last transaction in the base tables up to which the materialized view was previously refreshed. During subsequent refreshes, Amazon Redshift processes only the newly inserted, updated, or deleted tuples in the base tables, referred to as a delta, to bring the materialized view up-to-date with its base tables. In other words, Amazon Redshift can incrementally maintain the materialized view by reading only base table deltas, which leads to faster refresh times.

For AXS, Amazon Redshift analyzed their materialized view definitions, which join multiple tables and apply filters and aggregations, to figure out how to incrementally maintain their specific materialized view. Each time AXS refreshes the materialized view, Amazon Redshift quickly determines whether a refresh is needed and, if so, incrementally maintains it. As records are ingested into the base table, the materialized view refresh times stay fast and grow very slowly, because each refresh reads a delta that is small and roughly the same size as the previous ones. In comparison, refreshes using CTAS are much slower, because each refresh reads all the base tables. Moreover, refresh times using CTAS grow much faster, because the amount of data each refresh reads grows with the ingest rate.

You are in full control of when to refresh your materialized views. For example, AXS refreshes their materialized views based on triggers defined in Etleap. As a result, transactions that are run on base tables do not incur additional cost to maintain dependent materialized views. Decoupling the base tables’ updates from the materialized view’s refresh gives AXS an easy way to insulate their dashboard users and offers them a well-defined snapshot to query, while ingesting new data into base tables. When AXS vets the next batch of base table data via their ETL pipelines, they can refresh their materialized views to offer the next snapshot of dashboard results.

In addition to efficiently maintaining their materialized views, AXS also benefits from the simplicity of Amazon Redshift storing each materialized view as a plain table. Queries on the materialized view perform with the same world-class speed with which Amazon Redshift runs any query. You can organize a materialized view like any other table, which means you can exploit distribution and sort keys to further improve query performance. Finally, when you need to process many queries at peak times, Amazon Redshift’s concurrency scaling kicks in automatically to elastically scale query processing capacity.

Conclusion

Now that the materialized views feature is generally available, Etleap gives you the option of using materialized views rather than tables when creating models. You can use models more actively as part of your ETLT strategies, and also choose more frequent update schedules for your models, thanks to the performance benefits of incremental refreshes.

For more information about Amazon Redshift materialized views, see Materialize your Amazon Redshift Views to Speed Up Query Execution and Creating Materialized Views in Amazon Redshift.

by Christian Romming, Prasad Varakur (AWS), and Vuk Ercegovac (AWS)

How Etleap automates its infrastructure process with Terraform & Ansible

Introduction

“Infrastructure as Code” (IaC) is a term every system administrator has heard by now. It describes the practice of managing and provisioning IT infrastructure through source code instead of performing tasks manually. As we will explore, this helps DevOps teams efficiently and safely adapt infrastructure to meet the ever-changing requirements of the business.

How can this paradigm help you? It encourages the adoption of software development practices like keeping infrastructure definitions and configuration scripts in a source control system, automated testing, and peer reviews, all of which are tried-and-true ways to improve infrastructure management.

If you’re starting your journey into IaC, there are many resources you can reference to familiarize yourself with the concepts and terminology associated with this approach. Kief Morris’ “Infrastructure as Code: Managing Servers in the Cloud” is an essential book on the topic (alternatively, Martin Fowler’s blog gives a great overview).

At Etleap, we embrace IaC to build and improve our service every day. This practice helps us in our ongoing effort to make Etleap the best ETL platform it can be.

“IaC makes it possible to effortlessly and reliably spin up any element of an infrastructure at any time, or even the entire infrastructure, in a matter of minutes.”

Let’s take a look at a few examples of how using IaC has helped Etleap build a better product.

Service uptime and disaster recovery

One advantage of IaC is that it makes it possible to effortlessly and reliably spin up any element of an infrastructure at any time, or even the entire infrastructure, in a matter of minutes. The new infrastructure will be consistent with the previous one, which is to say that its software and configuration are the same (every security patch is applied, OS is configured the same way, allocated resources are identical).

Imagine a scenario where extreme weather or a natural disaster destroys the data centers where Etleap is hosted. For obvious reasons, it’s vital that we have a plan to recover from such an ordeal. Using IaC, we’re able to easily and reliably reproduce the entire infrastructure needed by Etleap and get it running in a new data center in short order. And so, even in this extreme case we’re able to recover from a service disruption incredibly quickly.

“With IaC tools available, almost every aspect of an infrastructure’s configuration can be defined in a configuration file or scripted.”

Another common issue is configuration drift, which is a major concern for services that must ensure high availability and disaster recovery strategies. If left unchecked, configuration drift increases the risk of prolonged outages or loss of data. By making sure every change introduced to the infrastructure configuration is done through the definition files or scripts, we can totally eliminate configuration drift. This way, we reduce the risk of having misconfiguration issues when we need to re-provision our infrastructure.

Finally, to keep Etleap up and running at all times, we should be able to add more resources or replace an unhealthy component at any time. Let’s imagine that a server instance stops serving requests because it’s running out of memory. In this case we should be able to provision a new server with more memory and redirect traffic to it. Etleap dealt with a similar challenge when we encountered memory shortages on an Amazon Elastic MapReduce (EMR) cluster. After the cluster became unhealthy, we traced the root cause to memory degradation. Because the EMR cluster’s provisioning and configuration were scripted, it was straightforward to update the configuration, start a new cluster, and point Etleap to it once it launched, with zero downtime for our users.

Improved monitoring and security

With IaC tools available, almost every aspect of an infrastructure’s configuration can be defined in a configuration file or scripted. Not only physical hardware, networks, and storage, but also identity access management (IAM), monitoring, alarm systems, and much more.

Going back to our example of a server running out of memory: when things go sideways, it’s essential to have a monitoring system that alerts us to these issues so we can avoid service outages. If we know a certain node is going into a bad state, we can take action to improve its behavior or, in the worst case, replace the node outright. This way, we’re usually able to resolve the issue before our customers notice any problems or downtime. It also makes sense to tie the definition of these alarms to the infrastructure they monitor: any time the infrastructure changes, its monitoring is updated as well.

IAM is hugely important when it comes to security. Meticulously defining the right access levels and ingress rules to different parts of the infrastructure is crucial for data and system protection. By restricting access to production servers we can prevent unauthorized persons from gaining access to sensitive data. Finally, audits and reviews of the configuration and any changes allow us to maintain the right access at all times. 

Etleap productization

At Etleap, IaC practices enable a repeatable deployment process. Each time we provision our infrastructure the result is a known quantity, and that’s something we take advantage of in multiple ways.

Etleap is SaaS, meaning our product runs in the cloud and our users don’t need to install or maintain anything to start using it. However, some of our customers, especially those with strict security requirements, require that Etleap runs in an isolated AWS VPC. Embracing IaC helps us efficiently deploy Etleap to a completely new environment. The installation process is well-defined and tested, and is a daily occurrence for us. This allows us to ensure that Etleap running in one environment will behave identically to another instance running in a different environment, which saves time when identifying issues and reduces the need for customers to contact the support team. Thinking of infrastructure as a product itself gives Etleap a competitive advantage, as it allows us to serve customers with complex security requirements.

“IaC not only helps manage production environments but the entire software development lifecycle.”

Running identical instances of Etleap in multiple environments also simplifies updates. For example, diagnosing and fixing a bug for a user running Etleap in their own VPC would be really challenging if each environment differed from the others. By ensuring parity between all environments where Etleap is deployed, we eliminate this potential headache.

Streamlining the development and delivery cycle

IaC not only helps manage production environments but the entire software development lifecycle. During development, we can provision an isolated sandbox environment to safely make changes without the risk of breaking something. We can test new changes against our sandbox environment to detect more quickly whether they would negatively affect the production environment when deployed. Having each new feature or bug fix properly tested during development reduces the risk of introducing issues when changes are rolled out. Once thoroughly tested, changes are deployed automatically through our CI/CD process: any new feature or bug fix is rolled out to our users as soon as it’s merged into the master branch.

For example, some time ago I was tasked with improving our validation process for users wanting to add or edit an S3 data lake or S3 input connection. One of our goals was to give the user more accurate information about misconfiguration problems with their connections. In both cases, most of these configuration issues were related to incorrect policies being attached to a given IAM user. It would have been quite tedious to add all these cases manually through the AWS console. Instead, we were able to quickly and easily script the policies that matched the cases we wanted to test and roll them out to the sandbox environment.

Another case where we took advantage of our ability to effortlessly provision a sandbox environment during development was when we improved our ZooKeeper cluster. We switched from a standalone ZooKeeper node to an ensemble of nodes. We scripted the cluster configuration and provisioned it in a sandbox environment. This way, we could verify that the cluster was working as expected. We were also able to stress-test the cluster to see how it behaved. There were some questions we wanted to answer before rolling it out: How well does the cluster behave when nodes are disconnected? Are new nodes automatically incorporated into the cluster? Will the master role switch to another node when the current master becomes unhealthy? We tested each of these scenarios in the safety of our sandbox environment without affecting production. When we finally rolled the new ZooKeeper cluster out, we could rest easy that it would work as expected, as we’d already tested many of the possible points of failure during development.

Conclusion

By leveraging IaC, Etleap benefits in numerous ways. Hosting the infrastructure design in definition files and scripts ensures a consistent environment, where each node has exactly the desired configuration. This makes it easier and less risky to update many aspects of the infrastructure. Errors can be identified and fixed faster, or in the worst case, infrastructure can be reverted to the last functional configuration. Changes can be made quickly and with little effort, and we can easily scale by increasing the number of nodes or their size.

AWS re:Invent 2019 Roundup

Materialized Views, Amazon Redshift Ready, and more!

Last week Etleap put on another exciting show at AWS re:Invent, where we announced some new features and integrations with AWS services, were interviewed by the tech experts over at “theCUBE,” hosted a session all about data lakes, and most importantly, spoke with countless attendees about ETL. Here’s a roundup of all the Etleap action you may have missed at AWS re:Invent 2019.


Etleap’s booth was a veritable oasis of ETL discussion and Etleap product demos
Amazon Redshift launches Materialized Views with help from Etleap

Among AWS’ numerous announcements at re:Invent this year was the availability of Materialized Views in preview on Amazon Redshift. The Materialized Views feature is designed to help customers achieve up to 100x faster query performance on analytical workloads such as dashboarding queries from Business Intelligence (BI) tools and ELT data processing. Etleap helped launch this feature by integrating it into a beta version of Etleap Models (let us know if you want to be included in the beta!) and showing that it can give an ~8x performance boost. The Redshift team showcased our results in their chalk talk on “Accelerating performance with Materialized Views.”


Yannis (seated, left) and Vuk (standing, right) from the Amazon Redshift team showcase Etleap at their Redshift Materialized Views Chalk Talk

“We are delighted to have Etleap help launch the Materialized Views feature in Amazon Redshift,” said Andi Gutmans, Vice President, Analytics, Amazon Web Services, Inc. “Amazon Redshift Materialized Views allow customers to realize a significant boost in query performance in ETL pipelines and BI dashboards. By integrating Etleap with this new functionality, customers can seamlessly get the benefits of Amazon Redshift Materialized Views without needing to make any application changes.”

You can read the full Etleap press release about Amazon Redshift Materialized Views here.

Etleap Founder makes the case for more analyst-friendly data lakes, alongside Redshift team

Many Etleap customers use our solution to build their S3/Glue data lakes, so data lakes are a topic we’ve learned a thing or two about over the years. For re:Invent this year, we thought we’d share our data lake expertise with the world by hosting a session alongside the Redshift team entitled “Five data lake considerations with Amazon Redshift, Amazon S3 & AWS Glue.”


Etleap founder and CEO, Christian Romming, led the session focused on data lakes

Have an interest in data lakes yourself? You can check out the session here.

Etleap featured on enterprise tech talk show

After our data lakes session, Founder and CEO of Etleap, Christian Romming, sat down with the hosts of “theCUBE,” re:Invent’s resident technologies interview show. Check it out:

Etleap founder sits down with David Vellante and John Walls of theCUBE
Etleap achieves Amazon Redshift Ready Designation

Distinguishing ourselves in the Amazon Redshift partner ecosystem, we announced that Etleap has achieved the designation of “Amazon Redshift Ready,” a recently announced status among partners who have proven integration with Amazon Redshift.

Etleap was featured in the keynote announcement among a select few debuting partners

“Etleap is proud to achieve Amazon Redshift Ready status,” said Christian Romming, Founder and CEO of Etleap. “Our team is dedicated to helping companies achieve maintenance-free, enterprise-grade ETL by leveraging the agility, breadth of services, and pace of innovation that AWS provides. Our status as an Amazon Redshift Ready partner shows our continued commitment to Amazon Redshift and the AWS ecosystem.”

You can read the full Etleap press release covering the Amazon Redshift Ready announcement here.


This concludes our roundup of the biggest Etleap news stories from AWS re:Invent 2019. Stay tuned for more Etleap trade show news, and for all things ETL you’re already in the right place.

On-Demand Webinar: Etleap presents “Customer First Technology”

In this webinar, we explore how and why eMoney puts their customers first by choosing technologies that solve customer challenges, and look at their use case for running Etleap and Looker within a highly secure VPC environment.

Ready to try Etleap for yourself? Click here to get started!

Stay tuned to this blog for more webinars and other Etleap content, and for all things ETL you’re already in the right place.

Etleap Achieves Amazon Redshift Ready designation

Recently announced designation distinguishes Etleap on the Redshift platform

SAN FRANCISCO, Calif. – December 4, 2019 — Etleap announced today that it has achieved the Amazon Redshift Ready designation. This designation recognizes that Etleap has demonstrated successful integration with Amazon Redshift. 

Achieving the Amazon Redshift Ready designation differentiates Etleap as an AWS Partner Network (APN) member with a product that integrates with Amazon Redshift and is generally available and fully supported for AWS customers. AWS Service Ready Partners have demonstrated success building products that integrate with AWS services, helping AWS customers evaluate and use their technology productively, at scale and at varying levels of complexity.

“Etleap is proud to achieve Amazon Redshift Ready status,” said Christian Romming, Founder and CEO of Etleap. “Our team is dedicated to helping companies achieve maintenance-free, enterprise-grade ETL by leveraging the agility, breadth of services, and pace of innovation that AWS provides. Our status as an Amazon Redshift Ready partner shows our continued commitment to Amazon Redshift and the AWS ecosystem.”

To support the seamless integration and deployment of these solutions, AWS established the AWS Service Ready Program to help customers identify products integrated with AWS services and spend less time evaluating new tools, and more time scaling their use of products that are integrated with AWS Services.

Etleap is analyst-friendly ETL-as-a-service for Amazon Redshift and Snowflake data warehouses and Amazon S3/AWS Glue data lakes. Etleap replaces time-consuming ETL setup and maintenance with intuitive software and a managed service that automates data pipelines and reduces time to value.

For more information, email info@etleap.com; Follow us on Twitter @etleap; or Like us on Facebook @etleap.


About Etleap: Etleap was founded by Christian Romming in 2013. Before founding Etleap, Romming was the CTO of an ad-tech company, where he recognized the available solutions for building data pipelines required monumental engineering resources to implement, maintain, and scale. Etleap is backed by world-class investment firms First Round Capital, SV Angel, BoxGroup, and Y Combinator. Our mission is to make data analytics teams more productive. Our ETL solution lets analysts build data warehouses without internal IT resources or knowledge of complex scripting languages. This reduces the time of typical ETL projects from weeks to hours, and takes out the pain of maintaining data pipelines over time.

Etleap announces support for Amazon Redshift Materialized Views

Etleap customers will benefit from new technology in Etleap for faster query performance

SAN FRANCISCO, Calif. – December 2, 2019 — Today, Etleap, an Advanced Technology Partner in the Amazon Web Services (AWS) Partner Network (APN) and provider of fully-managed Extract, Transform, Load (ETL)-as-a-service, announced support for Amazon Redshift Materialized Views. The new feature is designed to help customers achieve up to 100x faster query performance on analytical workloads such as dashboarding queries from Business Intelligence (BI) tools and ELT data processing. Because Etleap was built from the ground up to handle data integration for Amazon Redshift users, including orchestration of transformations within Amazon Redshift, the company is uniquely positioned to test this new capability and provide support for it in its product.

“We are delighted to have Etleap help launch the Materialized Views feature in Amazon Redshift,” said Andi Gutmans, Vice President, Analytics, Amazon Web Services, Inc. “Amazon Redshift Materialized Views allow customers to realize a significant boost in query performance in ETL pipelines and BI dashboards. By integrating Etleap with this new functionality, customers can seamlessly get the benefits of Amazon Redshift Materialized Views without needing to make any application changes.”

“For as long as Amazon Redshift has been around, Etleap has been making some of the most complex data pipelines easier and faster for AWS users, so working with the Amazon Redshift team to improve post-load transformations with Amazon Redshift Materialized Views was a perfect fit for us,” said Christian Romming, Founder and CEO of Etleap. “Etleap was designed for AWS and delivers analyst-friendly, enterprise-grade ETL-as-a-service. By collaborating with the Amazon Redshift team on this project, we continue to show our commitment to our customers and AWS, and have taken another major step in our quest to make data integration less of a headache without sacrificing control or visibility — and we couldn’t be more excited.”

Customers value Etleap’s modeling feature, because it allows them to gain advanced intelligence from their data. One challenge for customers is the time it takes to refresh a model when data changes. Amazon Redshift Materialized Views allows Etleap to refresh model tables faster and use fewer Amazon Redshift cluster resources in the process, which frees up more resources for other Amazon Redshift workloads. This allows a customer’s engineering and analyst teams to deliver on the desired outcome more efficiently.

For more information, email info@etleap.com; Follow us on Twitter @etleap; or Like us on Facebook @etleap.


About Etleap: Etleap was founded by Christian Romming in 2013. Before founding Etleap, Romming was the CTO of an ad-tech company, where he recognized the available solutions for building data pipelines required monumental engineering resources to implement, maintain, and scale. Etleap is backed by world-class investment firms First Round Capital, SV Angel, BoxGroup, and Y Combinator. Our mission is to make data analytics teams more productive. Our ETL solution lets analysts build data warehouses without internal IT resources or knowledge of complex scripting languages. This reduces the time of typical ETL projects from weeks to hours, and takes out the pain of maintaining data pipelines over time.

AXS knocks it out of the park with modern data analytics

Learn how Etleap and Looker helped AXS reduce manual ETL work and reporting, allowing them to focus on growth for themselves and their clients.

AXS is a ticketing company for live entertainment

AXS powers the ticket buying experience for over 350 partners worldwide

AXS is a leading ticketing, data, and marketing solutions provider in the US, UK, and Europe. The company and its solutions empower more than 200 clients (teams, arenas, theaters, clubs, and colleges) to turn data into action, maximize the value of all their events, and create joy for fans. It is an enterprise event technology platform that services venues, promoters, and sports teams, providing fans the opportunity to purchase tickets directly from their favorite venues via a user-friendly ticketing interface. While customers know them as a destination for tickets, clients recognize AXS for their data services, including transforming, reporting, analyzing, and more.

The data services offered by AXS have always been incredibly helpful for clients. But making them so valuable required a significant amount of ETL and reporting work, which created challenges for the team. To learn more about those challenges, and how the team eventually found a solution in both Etleap and Looker, we spoke with Ben Fischer, the Sr. Director of Business Intelligence and Strategy.

THE CHALLENGES

Ben oversees the Business Intelligence and Strategy team, which manages everything from integrations and building data models to powering the data warehouse and products across AXS. The team’s main objectives are to power AXS’s internal data services, while also delivering data services for clients.

Before Etleap and Looker, the Data Engineering team was spending more and more of their time working on internal and external requests for one-off ingestions and custom data sources. Each data source would take anywhere from half a day to weeks (or even months) to implement, which meant the team was spending most of their time on ETL work, and not enough time on making the data useful.

“With Etleap, we’re able to do the ETL end-to-end and get it directly into the hands of whoever’s trying to use it right away.”

– Ben Fischer, Sr. Director of Business Intelligence

Ben told us, “The whole team was just getting sucked into ETL work constantly, which was not the best use of their time. We wanted to be working on modeling and on the products.”

SEARCHING FOR A BETTER DATA SOLUTION

In order to find the right solutions to fit their needs, the AXS team compared several modern ETL solutions.

When comparing the options, they found that most of the tools were good solutions for getting ETL out of the engineers’ hands, allowing less technical people to consistently bring in the data, while also offering support and monitoring. However, Etleap stood out in two main areas: transformation and transparency.

For AXS, transformation was important because they wanted the ability to not only bring in a new data source, but automatically transform it into something useful for analysts. “Stitch and Fivetran are really focused on the ‘extract’ and ‘load,’ so they’ll bring data in from an outside source and put it into your data warehouse, but they don’t offer much in the way of transformation. You still have to transform it afterwards into something that’s usable, which means you’re still relying on engineering to access the data. With Etleap, we’re able to do the ETL end-to-end and get it directly into the hands of whoever’s trying to use it right away,” said Ben.

Etleap’s data wrangler makes parsing and structuring data take minutes instead of months.

Beyond the transformation aspect, the AXS team was also impressed by Etleap’s level of transparency around reliability. “A lot of the competitors emphasize this idea of 100% reliability. They would say that they would never miss any data, everything would come through perfectly, and you would never have any issues. But we knew that wasn’t the case. No tool is 100% perfect, and when talking with Etleap, they were much more open about what we could expect. They acknowledged that 100% reliability is the objective, but that it’s challenging to achieve in practice, so it’s something they’re continuously working towards. Some of the competitors wouldn’t even acknowledge that reliability could possibly be an issue, which makes you feel like they may not support you if anything goes wrong,” said Ben.

To top things off, Etleap was also very straightforward to use. It required very little training and offered reliable support, which meant AXS could get up and running immediately. When the team first started evaluating ETL solutions, they encountered complexity with managing the tools and building integrations. “But with Etleap, it’s pretty straightforward. There’s always somebody available if you need to reach out. That meant we could start using Etleap in just a matter of days, rather than undergoing weeks of training,” said Ben.

Once the team found an ETL solution, it was time to help out the analysts. They looked at a variety of products for business intelligence, and even tried a few different solutions, but Looker stood out in part because it could get report building out of the hands of the analysts. Ben told us, “With Etleap, it was about getting ETL out of engineering. With Looker, it was about getting report building out of analytics, so analysts can spend their time actually forming opinions, defining strategy, doing analyses, and digging into the data, rather than just building reports day in and day out.”

Consistency and confidence are critical to democratizing data, and Looker’s data modeling layer allows people across AXS to pull their own insights and reports very quickly without having to worry about whether the numbers match. This means the Business Intelligence & Strategy team can now stay focused on building models and driving insights, instead of just building reports.

WITH ETLEAP AND LOOKER, THE ENTIRE AXS TEAM IS ABLE TO FOCUS ON HIGH-VALUE TASKS THAT DRIVE THEIR BUSINESS FORWARD.

Since implementing both Etleap and Looker, the positive impact has been felt across the entire AXS team, as well as by their clients.

First, the Data Engineering and Business Intelligence & Strategy teams are spending far less time on manual ETL and reporting work, and much more time on high-value tasks that contribute to internal growth, as well as client successes.

“These tools make our various teams more impactful across the business. For example, if our engineers were just doing all the ETL work manually, we would not be able to do even half of the work that we’re doing to drive the business forward. And the same applies with Looker. Right now, we’ve got people all over the organization looking at reports every day in Looker and answering their own questions about what’s going on with the business.

“Our lives have become much less about pulling reports or bringing in data, and more about really driving value for the company beyond the mundane day to day.” Beyond making life easier and their work more impactful, reporting has also become much faster. Previously, you may have had to wait weeks, or potentially months, to get access to a new data source so you could do your analyses. Now, you can solve that yourself in a couple of hours, without having to wait for other people.

Looker gives companies a single source of truth for all their data.

“With Looker, it was about getting report building out of analytics, so analysts can spend their time actually forming opinions, defining strategy, doing analyses, and digging into the data, rather than just building reports day in and day out.”

– Ben Fischer, Sr. Director of Business Intelligence

“It also makes iteration much faster. We can define something, put it into production, and report off of it. If three days later we realize we forgot something, it’s a two-minute fix rather than going back to engineering and having someone spend half a day on it,” said Ben.

Finally, having access to Etleap allows the team to easily look at data from different angles, making the analysis and insights for clients even more valuable. “Etleap has a function for modeling data, which is useful for reporting, as it allows you to build the aggregations you need to power impactful reports. We can have processes that run every day and get a quick summary of the data from all different perspectives. Before, it would have taken an engineer a couple of days to build that,” said Ben.

With Etleap and Looker, the AXS team finally has the time and resources to focus on bigger initiatives, including GDPR, internationalization, increasing accessibility across the organization, and providing even more data services to clients. With these tools in their arsenal, the sky is truly the limit.

What is the “length” of a string?

Finding the length of a string in JavaScript is simple: you use the .length property and that’s it, right?

Not so fast. The “length” of a string may not be exactly what you expect. It turns out that the string length property is the number of code units in the string, and not the number of characters (or, more specifically, graphemes) as we might expect. For example, “😃” has a length of 2, and “👱‍♂️” has a length of 5!

Screenshot from Etleap’s data wrangler where the column width depends on the column contents.

In our application we have a data wrangler that lets you view a sample of your data in a tabular format. Since this table supports infinite scrolling, both rows and columns are rendered on demand as you scroll vertically or horizontally. We can’t render all the rows and columns at once since a table could easily include more than a hundred thousand cells, which would bring the browser to its knees.

“The ‘length’ of a string may not be exactly what you expect.”

Imagine if most rows of a column contain a small amount of data, such as a single word, but a single row contains more data, such as a sentence. If this row is outside of the currently viewed area, we don’t want the column to expand as you scroll down, and we definitely don’t want to cram the sentence into the same small space required by the word. This means that we need to find the widest cell in the column before rendering all the cells. It’s fast and straightforward to find the length of the content in each cell. However, what if the cell contains emojis or other content where we can’t rely on the length property to give us an accurate value?

Code units vs. code points

Let’s do a quick Unicode recap. Each character in Unicode is identified by a unique code point, represented by a number between 0 and 10FFFF (in hexadecimal). Unfortunately, 10FFFF is a large number and requires 4 bytes to represent. To avoid having to allocate 4 bytes for each character, Unicode also specifies different encoding standards that can represent code points more compactly, including UTF-16, which is the internal string encoding used by JavaScript.

UTF-16 is a variable-length encoding, which means that it uses either 2 or 4 bytes for each code point, depending on what is required. To differentiate, we say that UTF-16 uses one or two code units to represent one Unicode code point. The most commonly used characters all fit into one code unit; however, some of the more exotic characters, such as emojis, require two code units.

“It turns out that code points are not the only caveat regarding string lengths in JavaScript.”

This is where a problem arises. Since the .length property returns the number of code units, and not the number of code points, it does not directly map to what you may expect. As an example, the emoji “😃” has a length of 2, even though it looks like only one character.

How can we work around this? ES2015 introduced a way of splitting a string into its respective code points by providing a string iterator. Both Array.from and the spread operator [...string] use this internally, so both can be used to get the length of a string in code points.
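
For example, here is a quick sketch you can run in any modern JavaScript console:

const emoji = "😃";
console.log(emoji.length);             // 2: UTF-16 code units (a surrogate pair)
console.log([...emoji].length);        // 1: code points, via the string iterator
console.log(Array.from(emoji).length); // 1: same mechanism, same result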

Combining Characters

It turns out that code points are not the only caveat regarding string lengths in JavaScript. Another is combining characters. A combining character is a character that doesn’t stand on its own, but rather modifies the characters around it. This is supported in Unicode, meaning that a character such as “è” can actually be made up of two code points, “e” and “\u0300”. The same mechanism is widely used to combine emojis into new representations, such as “👱‍♂️”, which is a combination of “👱” and “♂” with a zero-width joiner (“\u200D”) in between.
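
A small sketch shows the effect; note that counting code points alone doesn’t help here:

const combined = "e\u0300";                    // "e" + combining grave accent
console.log(combined.length);                  // 2 code units
console.log([...combined].length);             // 2: still two code points
console.log(combined.normalize("NFC").length); // 1: "è" as a single code point

const man = "\u{1F471}\u200D\u2642\uFE0F";     // "👱‍♂️": person + ZWJ + male sign + variation selector
console.log(man.length);                       // 5 code units
console.log([...man].length);                  // 4 code points, but one visible grapheme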

Working around this is more complicated. Currently there is no built-in way of reliably counting graphemes in JavaScript. A current stage 2 proposal suggests adding Intl.Segmenter, which can be used to count the graphemes in a string, but there’s no guarantee that it will make it into the spec (there’s a polyfill for the proposal if you’re desperate).
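
As proposed, grapheme counting with Intl.Segmenter would look roughly like this (the API could still change while the proposal is in flight):

const segmenter = new Intl.Segmenter("en", { granularity: "grapheme" });
const graphemes = [...segmenter.segment("👱‍♂️")];
console.log(graphemes.length); // 1: one grapheme, despite 5 code units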

Environment-Specific Differences

Did you know there’s a ninja cat emoji? Neither did we, because it’s a Windows-only emoji! It’s represented by a combination of “🐱” and “👤”. This means that Windows users will see this combination as one character, while other users will see it as two characters. Depending on the user’s choice of fonts, they could even see something completely different. You could try to prevent this issue by choosing a specific font for your web app, but that won’t be sufficient, as the browser will still search through other fonts on the system if a character is not available in your chosen font.

“The various environment-specific differences mean that there’s generally no way of measuring the rendered width of a string mathematically.”

Checkmate?

The various environment-specific differences mean that there’s generally no way of measuring the rendered width of a string mathematically. Therefore, the only way to determine the pixel length is to render the string and measure it. For our use case in the wrangler, this is exactly what we wanted to avoid in the first place. However, there are some optimizations we can make.

Instead of rendering all the strings in each column, we can split the strings into their corresponding graphemes and render them individually. This allows us to cache the pixel length of each grapheme we encounter. Since there are substantially fewer graphemes than unique strings in a table, this results in a significant reduction in total rendering. This way we can easily determine the correct width of a column, all while keeping the scrolling snappy and your browser happy.
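
Here is a minimal sketch of the caching idea, assuming Intl.Segmenter (or a polyfill) for grapheme splitting and a canvas context for the render-and-measure step; the font and structure are illustrative rather than Etleap’s actual implementation:

const ctx = document.createElement("canvas").getContext("2d");
ctx.font = "13px sans-serif"; // assumed to match the table's font

const widthCache = new Map();
const segmenter = new Intl.Segmenter("en", { granularity: "grapheme" });

function graphemeWidth(grapheme) {
  let width = widthCache.get(grapheme);
  if (width === undefined) {
    width = ctx.measureText(grapheme).width; // render and measure only once per grapheme
    widthCache.set(grapheme, width);
  }
  return width;
}

function cellWidth(text) {
  // Sum the cached grapheme widths; this ignores kerning between graphemes,
  // which is acceptable when estimating a column width.
  let total = 0;
  for (const { segment } of segmenter.segment(text)) {
    total += graphemeWidth(segment);
  }
  return total;
}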

High Pipeline Latency Incident Post-Mortem

Between 15:30 UTC on 8/27 and 14:00 UTC on 8/29 we experienced periods of higher-than-usual pipeline latencies. Between 04:00 and 10:00 UTC on 8/29 most pipelines were completely stopped. At Etleap we want to be transparent about system issues that affect customers, and this post summarizes the timeline of the incident, our team’s response, and what we are doing to prevent a similar incident from happening again.

Number of users with at least one pipeline with higher-than-normal latency.

What happened and what was the impact?

At around 11:30 UTC on 8/27 our ops team was alerted about spikes in two different metrics: CPU usage on a ZooKeeper node, and stop-the-world garbage collection (STW GC) time in a Java process responsible for orchestrating certain ETL activities. The two processes were running in different Docker containers on the same host. From this point onwards we saw intermittent spikes in both metrics and periods of downtime of the orchestration process, until the final fix was put in place at 14:00 UTC on 8/29. Additionally, at 15:30 UTC on 8/27 we received the first alert regarding high pipeline latencies. There were intermittent periods of high latency until 10:00 UTC on 8/29.

Incident Response

When our ops team received the first alert, they followed our incident response playbook to diagnose the problem. This includes checking potential causes such as spikes in usage, recently deployed changes, and infrastructure component health. The team determined that the issue had to do with the component that sets up source extraction activities, but found no other correlations. Suspecting that an external change related to a pipeline source was causing the increased garbage collection activity, they went on to narrow down the problem along dimensions such as source, source type, and customer. Etleap uses a ZooKeeper cluster for things like interprocess locking and rate limiting, and the theory was that a misbehaving pipeline source was causing the extraction logic to put a significant amount of additional load on the ZooKeeper process, while at the same time causing memory pressure within the Java process itself. However, after an exhaustive search it was determined that the problem could not be attributed to a single source or customer. Also, memory analysis of the Java process with garbage collection issues showed nothing out of the ordinary.

The Culprit

Next, the team looked at the memory situation for the host itself. While each process was running within its defined memory bounds, we found that in aggregate the processes’ memory usage exceeded the amount of physical memory available on the host. The host was configured with swap space, and while this is often a good practice, it is not so for ZooKeeper: by being forced to swap to disk, ZooKeeper’s response times went up, leading to queued requests.

Stats show the ZooKeeper node in an unhealthy state.

In other words, the fact that we had incrementally crossed an overall physical memory limit on this host caused a dramatic degradation of ZooKeeper’s performance, which in turn resulted in increased garbage collection time in a client process. The immediate solution was to increase the physical memory on this host, which brought the ZooKeeper stats back to normal levels (along with the CPU and STW GC metrics mentioned before).

ZooKeeper back in a healthy state after the memory increase.

Next steps

We are taking several steps to prevent a similar issue in the future. First, we are configuring ZooKeeper not to use swap space. Second, we’re adding monitoring of key ZooKeeper stats, such as latency and outstanding connections. Third, we are adding monitoring of available physical memory on each host to make sure we know when pressure is getting high. Any of these three configuration and monitoring improvements in isolation would have led us to find the issue sooner, and all three will help prevent issues like this from happening in the first place.

While it’s impossible to guarantee there will never be high latencies for some pipelines, periods of high latencies across the board are unacceptable. What made this incident particularly egregious was that it went on for over 40 hours, and the whole Etleap team is sorry that this happened. The long resolution time was in large part because we didn’t have the appropriate monitoring to lead us towards the root cause; we have learned from this and are putting more monitoring of key components in place going forward.