In this blog, we'll look at the third building block for a "PERFECT" query performance — "Spark Tuning". If you have not read the previous posts in this series, you can find them at the following links — Part 1, Part 2, Part 3.
Parquet file block size is one of the hidden tricks for fine-tuning a model for extreme performance. This parameter sets the size of a data block in a Parquet file, and therefore governs the minimum amount of data a Spark task reads. Slicing a Parquet file into smaller chunks by lowering this number produces higher parallelism, and thus shorter execution times, as long as resources are unlimited.
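To see how this plays out in plain Spark (outside Kyligence's own tooling), here is a minimal sketch. The path, dataset, and 32m figure are illustrative; "parquet.block.size" and "spark.sql.files.maxPartitionBytes" are the standard parquet-hadoop and Spark SQL knobs for the write and read sides respectively:

```scala
import org.apache.spark.sql.SparkSession

// Illustrative only: a local session with the read-side split size capped
// at 32 MB so each task picks up roughly one 32 MB block.
val spark = SparkSession.builder()
  .appName("parquet-block-size-demo")
  .master("local[*]")
  .config("spark.sql.files.maxPartitionBytes", 32L * 1024 * 1024)
  .getOrCreate()

val df = spark.range(0L, 100000000L).toDF("id")

// Write side: "parquet.block.size" sets the row-group (block) size of the
// output files.
df.write
  .option("parquet.block.size", 32L * 1024 * 1024)
  .mode("overwrite")
  .parquet("/tmp/parquet_block_demo") // placeholder path

// More, smaller blocks => more input splits => more parallel tasks.
println(spark.read.parquet("/tmp/parquet_block_demo").rdd.getNumPartitions)
```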
However, in resource-limited settings, things are more complicated, and the belief that "the smaller the block size, the better the query performance" no longer applies. There is also no fixed formula for calculating a magic block size: the right number is an empirical conclusion that depends on various factors, including the resources available and the complexity of your queries. Through extensive testing and hands-on work, I have found a sweet spot that delivers excellent query performance while keeping resource utilization high. To make your life easier, here are the Parquet block size pairs for writing and reading jobs you should start with:
Parquet file block size — 32m: Start by experimenting with both 32m and 64m and comparing the model's query performance; the two should perform roughly the same (see the benchmarking sketch after this list).
Parquet file block size — 64m: In most cases, 64m gives the best query performance.
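One way to run that 32m-versus-64m comparison is simply to time a representative query against two builds of the same data, one per block size. This is a rough, hypothetical sketch (the paths, helper, and query are placeholders), reusing the `spark` session from the sketch above:

```scala
// Hypothetical helper: time one representative query against a build.
def timeQueryMs(path: String): Long = {
  val start = System.nanoTime()
  // The trailing count() is an action that forces the query to execute.
  spark.read.parquet(path).groupBy("id").count().count()
  (System.nanoTime() - start) / 1000000
}

// Placeholder paths: the same model data built with 32m and 64m blocks.
println(s"32m build: ${timeQueryMs("/tmp/build_32m")} ms")
println(s"64m build: ${timeQueryMs("/tmp/build_64m")} ms")
```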
[Screenshot: Kyligence Workspace Config Center]
The Parquet block size for model building jobs (i.e., Spark writing jobs) can be set either in the Kyligence Workspace Config Center or on a Kyligence model's settings page. For query jobs (i.e., Spark reading jobs), this parameter can only be configured at the workspace level. It is highly recommended that these two numbers match each other to ensure the best query performance.
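Kyligence's exact workspace property keys are not reproduced here, but the underlying idea can be sketched in plain Spark: derive both the write-side row-group size and the read-side split size from a single constant so they cannot drift apart. A hedged example, assuming `spark` and `df` exist as in the earlier sketch:

```scala
// One constant for both sides, so writing and reading stay in lockstep.
val blockSizeBytes: Long = 64L * 1024 * 1024 // 64m, the usual sweet spot

// Reading (query) side: cap each input split at the same 64 MB.
spark.conf.set("spark.sql.files.maxPartitionBytes", blockSizeBytes)

// Building (writing) side: emit 64 MB Parquet row groups.
df.write
  .option("parquet.block.size", blockSizeBytes)
  .mode("overwrite")
  .parquet("/path/to/model/segment") // placeholder path
```

Deriving both numbers from one constant mirrors the recommendation above: if the read-side split size and the write-side row-group size drift apart, some tasks can end up reading partial row groups while others sit idle.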
Making the Final Touches to a Kyligence Data Model
Only one step away from a "PERFECT" Kyligence data model. Stay tuned!