Eneco (Rotterdam)
Why work at Eneco
What matters to you when it comes to work? Is it the personal challenge, or rather the shared goal? Is it the atmosphere, or mainly the terms of employment? Everyone has their own reasons for choosing an employer, and for choosing us. What we can say is this: nobody works at Eneco 'just because'. The most frequently mentioned reason: here you contribute to the energy transition, genuinely helping the Netherlands switch to smarter, cleaner, and more sustainable energy. Another thing we often hear: the opportunity to make your own mark. After all, a shared goal like this can only be reached by giving employees freedom and responsibility. And all of this at a company where everything around salary and benefits is well arranged. Plenty of reasons, then, to want to work at Eneco. We are curious which one appeals to you most.
Lead the design and implementation of scalable data platforms, setting technical direction for pipelines, APIs, and production workflows.
Enable and mentor teammates and Data Scientists, ensuring Databricks, Snowflake, and DBT environments are production-ready and cost-efficient.
Drive engineering excellence through clean code, CI/CD, observability, and architectural decision-making that supports Eneco’s long-term vision.
At Eneco, we’re working hard to achieve our mission: sustainable energy for everyone. Learn more about how we’re putting this into action in our One Planet Plan.
As a Full Stack Data Engineer, you will play a crucial role in our diverse team, solving real-world forecasting problems through cutting-edge ML models. Our product leverages a modern data stack end-to-end: from data ingestion into Snowflake, to transformations with DBT, to running forecasts in Databricks, and finally exposing results through Python APIs and aggregation services.
This product has high visibility and impact at Eneco, driving innovation in how we forecast, optimize, and deliver energy solutions to our consumers.
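To give a flavour of the aggregation-service side of this stack, here is a minimal Python sketch. All names and data here are hypothetical illustrations, not Eneco's actual code: it simply rolls hourly forecast values (as they might be read back from a Databricks job via Snowflake) up into daily totals.

```python
from collections import defaultdict
from datetime import datetime

def aggregate_daily(forecasts):
    """Sum hourly forecast values into daily totals.

    `forecasts` is a list of (iso_timestamp, value) pairs, e.g. the
    output of a forecasting job. Returns {date: total} per day.
    """
    totals = defaultdict(float)
    for ts, value in forecasts:
        day = datetime.fromisoformat(ts).date().isoformat()
        totals[day] += value
    return dict(totals)

# Hypothetical hourly forecast output
hourly = [
    ("2024-06-01T00:00:00", 1.5),
    ("2024-06-01T01:00:00", 2.0),
    ("2024-06-02T00:00:00", 3.0),
]
print(aggregate_daily(hourly))  # {'2024-06-01': 3.5, '2024-06-02': 3.0}
```

In a real service this kind of aggregation would typically be exposed behind a Python API and fed from the warehouse rather than an in-memory list.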
Must Have:
Strong proficiency in SQL (experience with DBT and/or Airflow is preferred).
Solid experience writing clean, maintainable code (preferably in Python).
Hands-on experience with Databricks, specifically deployment and production use.
Strong knowledge of CI/CD pipelines and observability practices.
Nice to Have:
Familiarity or interest in MLOps and data science techniques.
Familiarity with cloud platforms (e.g., AWS or Azure). Infrastructure as Code (e.g. Terraform) is a plus.
Designing and maintaining robust SQL-based data pipelines (leveraging DBT and/or Airflow) for both streaming and batch workloads.
Building and maintaining clean, production-quality Python code, including APIs and aggregation services.
Supporting Data Scientists by ensuring their Databricks environments and workflows are production-ready and scalable.
Applying CI/CD pipelines and observability practices to guarantee reliable and maintainable deployments.
Contributing to application deployments and operations, ensuring solutions run smoothly in production.
Influencing architectural decisions and mentoring teammates to raise engineering standards across the team.
Collaborating with product managers, data scientists, and engineers to deliver high-impact forecasting products.
You will join a cross-functional team of Data Engineers, Machine Learning Engineers, Data Scientists, and Analysts, all working together to deliver forecasting solutions with real business impact. Collaboration and knowledge-sharing are at the core of how we work: we encourage experimentation, celebrate successes, and learn quickly from setbacks.
Our engineering culture values clean, maintainable code, automation, and end-to-end ownership. You’ll have the opportunity to shape data products from ingestion to deployment, contribute to technical decisions, and help ensure our solutions are reliable, scalable, and ready for production.
Together, we drive Eneco’s mission to innovate and accelerate the energy transition.
Questions? Then please reach out to our Recruiter: [email protected]