Data Engineer

Taipei / KKBOX - Engineering / Permanent

We, the KKBOX data engineers, are responsible for collecting, processing, and managing data to support various data applications across departments.
- App developers need data to understand users' in-app behavior for product improvement.
- The licensing team needs accurate metering data for royalty reports.
- Business insight and data analysts need data to find business insights and to analyze the performance of different channels and partners.

KKBOX is a data-driven company. As a data engineer at KKBOX, you are responsible for building and operating a data platform that collects, stores, and processes the data generated by running the KKBOX service. The constant challenges for data engineers at KKBOX are scale, performance, and running cost. We heavily leverage AWS data products such as Athena and EMR, and we have also developed in-house platforms on open-source software such as Presto and Hive.

If you are passionate about data engineering and believe that a well-architected data platform can bring great value to KKBOX, supporting better product development, business growth, and operational efficiency, then KKBOX data engineer is the perfect position for you. Come join us and become part of the KKBOX data team.


  • Responsibilities:

  • Data processing pipeline (ETL and data aggregation) maintenance and performance improvement.
  • Data modeling and schema design, new data feed onboarding and process automation.
  • Support the Business Insight team with analysis and data model integration.
  • Technical evaluation and data platform architecture design.
  • Continuous optimization of data processing workflows.
  • Requirements:

  • 3+ years of experience with hands-on data processing or analytics.
  • Hands-on experience with Python / Scala / Bash for data processing.
  • Familiarity with AWS data products, e.g. Athena, EMR, Glue, S3.
  • Experience with project management, and capable of cross-team communication.
  • Ability to solve problems independently, and proactive in learning new technologies.
  • Nice To Have:

  • Hands-on experience using Apache Spark / Hadoop / Presto for data processing.
  • Hands-on experience with API service development (please provide a public repository URL; any language).
  • Hands-on experience building development or production environments with infrastructure-as-code tools, e.g. Terraform, Ansible, Puppet.