Role & Responsibilities
In this role, you will design and implement a solution that connects machine-generated data with R&D analytics and simulation workflows.
Your key objectives:
- Design and implement an end-to-end data pipeline from field data (IoT + measurement data) to analytics
- Enable automated data processing and delivery for product development teams
- Build solutions that generate usage profiles and insights for simulation and engineering design
- Define and implement a scalable cloud-based data architecture
- Integrate data engineering solutions with engineering and simulation tools
- Collaborate with stakeholders to prioritize use cases and bring them into production
This is not just a pipeline project; it combines:
- data engineering
- analytics
- real-world product development needs
Requirements
We are looking for someone with strong, hands-on experience in comparable data environments:
Must-have skills
- Proven experience in designing and building data pipelines (ETL/ELT)
- Strong proficiency in Python (data processing, analytics)
- Experience with cloud platforms (e.g., Snowflake, Databricks, Azure, AWS)
- Solid understanding of data architectures (batch / streaming, data lake / warehouse)
- Experience working with scalable data processing systems
Nice-to-have skills
- Experience with IoT or sensor data
- Understanding of engineering or product development data
- Experience with tools such as:
  - Snowflake
  - Databricks / PySpark
- Exposure to data analytics or machine learning
- Ability to work in a client-facing role and facilitate technical discussions
Expected Outcome
Success in this project means:
- Data flows automatically from machines into analytics environments
- R&D teams have access to clean, usable, and structured data
- The solution is scalable and extendable to new use cases
- Data can be directly utilized as input for simulation and product design