Job Description
<Job Responsibilities>
- Design, build, and maintain robust, scalable data pipelines to support data processing and analytics
- Develop and maintain backend APIs and services to support analytics and AI features
- Work within the Databricks environment to build, test, and optimize data workflows
- Collaborate with data scientists and consultants to bring models into production
- Apply best practices in Git version control and manage CI/CD pipelines for deployment
- Communicate directly with technical and non-technical stakeholders in an agile setting
Job Requirements
<Necessary Skill / Experience>
・Education: Bachelor’s Degree in Computer Science or related field
・Language: English - Intermediate Level (Equivalent to TOEIC 800)
・Experience: More than 3 years of experience in data engineering and/or backend development
- Solid experience with cloud services, especially AWS and Azure
- Proficient in Python, SQL, and building RESTful APIs
- Familiar with Git workflows and CI/CD automation tools (e.g., GitHub Actions, GitLab CI)
- Willingness to learn Databricks and obtain certification; full training provided upon joining
<Preferable Skill / Experience>
- Previous experience in Spark and Databricks projects