Data Engineer


Founded in 2013, Teknasyon is a tech company that manages in-house all the processes a digital product may require, including programming, design, sales, and marketing. Its mobile apps have reached more than 1.5 billion users worldwide. In addition, by developing its internal solutions into SaaS products such as Rockads, VerifyKit, Desk360, and Deepwall, it offers these services to other companies with similar needs. Teknasyon also plays an investor role in growing and revitalizing the start-up ecosystem, and it continues to support local and foreign ventures in which it sees potential.

Apply now if you want to join the strong, ever-growing Teknasyon team and become part of this story. You could be the “Data Engineer” we are looking for!

Qualifications:
  • Strong software engineering background,
  • Minimum 3 years of experience,
  • Python and/or Scala experience on real-world projects,
  • Hands-on experience with at least some of these big data technologies: Spark, Hadoop, Hive, Presto, Cloudera,
  • Experience with stream processing (e.g., Kafka, Kinesis, Pub/Sub, Spark Streaming, Dataflow, Apache Flink),
  • Experience with ETL processes,
  • Experience with at least one SQL database such as MySQL or PostgreSQL,
  • Experience with NoSQL databases such as Amazon DynamoDB, Cassandra, or MongoDB,
  • Experience with cloud platforms (Amazon Web Services or Google Cloud Platform) and related tools such as Dataflow, BigQuery, EMR, Redshift, Athena, etc.,
  • Experience building new data pipelines in GCP or AWS environments,
  • Familiarity with the Linux shell environment,
  • Knowledge of DWH basics and exposure to BI projects,
  • Strong teamwork and communication skills, with a solution-oriented attitude,
  • Detail-focused, careful, with strong analytical thinking,
  • High motivation for innovation.

Responsibilities:
  • Designing data engineering infrastructures, pipelines, data lakes, and cloud architectures for ML/data applications,
  • Building bulk and streaming data pipelines and analytics solutions using tools such as Kubernetes, Kafka, Dataflow, BigQuery, and Airflow,
  • Designing and developing data models for Data Scientists,
  • Ensuring the best ETL performance by identifying needed indexes, partitions, materialized views, and so forth,
  • Modifying the physical data model for efficient data access paths,
  • Translating business requirements into data models that are easy to understand and use by different disciplines across the company,
  • Designing, developing, and maintaining scalable, reusable streaming or batch data pipelines for analysis, reporting, optimization, data collection, management, and usage,
  • Ensuring security, privacy, and compliance for all data assets.

What we offer:
  • Being part of a big team that builds global apps,
  • A team that values open communication, multidimensional leadership, and teamwork,
  • A dynamic work environment built on collaboration,
  • Never-ending opportunities for learning and improvement,
  • Internal, external, and online training,
  • An office full of joy and activities,
  • Monthly social events organized to lift team spirit,
  • A bonus when a colleague thanks you,
  • A referral bonus when a friend you recommend is hired.