

Data Engineering & Analytics @ Telstra

Location

Telstra ICC Bengaluru

Employment Type

Full Time


Closing Date

6 July 2024 11:59pm

Job Title

Data Engineering & Analytics

Job Summary

As a Data Specialist, you thrive on finding patterns in large datasets and translating business requirements into structured datasets. You collaborate with your colleagues to drive data-led decision-making across Telstra. By providing the right insights, in the right way, at the right time, you're central to our success.

Job Description

About Telstra

We're Australia's leading telecommunications and technology company, with a strong global footprint. Our purpose is to build a connected future so everyone can thrive. We're all about providing the best experience and delivering the best tech on the best network. This includes making Telstra the place you want to work.

We offer a full range of services and compete in all telecommunications markets throughout Australia, and we are the best-known brand in the technology and communications industry.

We have operations in more than 20 countries, including in India. In India we are a licensed Telecom Service Provider (TSP) and have extended our global networks into India with offices in Bangalore, Mumbai and Delhi. We’ve opened an Innovation and Capability Centre (ICC) in Bangalore and have a presence in Pune and Hyderabad. In India, we’ve set out to build a platform for innovative delivery and engagement that will strengthen our position as an industry leader. We’re combining innovation, automation and technology to solve the world’s biggest technological challenges in areas such as Internet of Things (IoT), 5G, Artificial Intelligence (AI), Machine Learning, and more.


Here’s what you can expect from us:

  • Hybrid way of work, allowing us to enjoy the benefits of both remote and in-office collaboration. This gives us more flexibility, autonomy, and diversity in our work environment, while maintaining the connection, culture, and creativity we value as a team. We believe this is the best way to support our employees' well-being, productivity, and innovation in the post-pandemic world.
  • Flexible working. Choose when and how you work so you can be at your best. 
  • Maternity Leave. Up to 26 weeks provided to the birth mother, with benefits for all childbirths.
  • Women in Tech. An initiative to promote women in technology careers. We believe that diversity and inclusion are essential for innovation and growth, and we want to support and empower more women to pursue careers in STEM fields.
  • Pay for performance. We recognize outstanding contributions through our competitive incentive programs. 
  • Insurance benefits. Receive generous insurance benefits such as medical, accidental and life insurance. 
  • Unlimited learning. Level up your skills with access to 17,000 learning programs. Learn ‘on the job’ and achieve university credits towards degrees and master’s programs. 
  • Global presence. With a global presence across 22 countries, there are many opportunities to work where we do business. 
Function overview:

Make a difference as part of Product and Technology. Your mission will be simple: capture market value at scale by building products our customers love that are simple to experience, seamless to deliver, and profitable to the core.

What you'll do:

Being part of Data Engineering means you'll join a team focused on extending our network superiority to enable the continued execution of our digital strategy. With us, you'll work with world-leading technology and change the way we do IT, ensuring business needs drive priorities and accelerating our digitisation programme.

We are seeking a highly skilled Data Engineer with expertise in Python, SQL, and REST APIs, and exposure to Azure App Services. The successful candidate will be responsible for designing, developing, and maintaining data pipelines using Spark, Python, Scala, and related technologies, and for ensuring data quality, data security, and optimal pipeline performance. A new engineer will focus mainly on developing reusable data processing and storage frameworks that can be used across the data platform.
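To give a flavour of what "reusable data processing frameworks" can mean in practice, here is a minimal, hypothetical sketch in plain Python (not Spark); the `Pipeline` class, step functions, and field names are all illustrative assumptions, not Telstra code:

```python
from dataclasses import dataclass, field
from typing import Callable, Iterable

# A record is just a dict; each step maps an iterable of records to an
# iterable of records, so steps compose freely and can be reused across
# datasets — the essence of a reusable processing framework.
Step = Callable[[Iterable[dict]], Iterable[dict]]

@dataclass
class Pipeline:
    """Tiny reusable pipeline: register steps once, run on any dataset."""
    steps: list = field(default_factory=list)

    def add(self, step: Step) -> "Pipeline":
        self.steps.append(step)
        return self  # return self to allow chaining

    def run(self, records: Iterable[dict]) -> list:
        for step in self.steps:
            records = step(records)
        return list(records)

# Example reusable steps (illustrative business rules)
def drop_nulls(records):
    return (r for r in records if all(v is not None for v in r.values()))

def add_revenue(records):
    return ({**r, "revenue": r["units"] * r["price"]} for r in records)

pipeline = Pipeline().add(drop_nulls).add(add_revenue)
rows = [
    {"units": 3, "price": 2.0},
    {"units": None, "price": 5.0},  # dropped by drop_nulls
]
print(pipeline.run(rows))  # → [{'units': 3, 'price': 2.0, 'revenue': 6.0}]
```

In a Spark deployment the same idea applies, with each step operating on a DataFrame instead of an iterable of dicts.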

Job Location: Bangalore

Key Responsibilities: 

The Data Engineer's role is to coordinate and execute all activities related to requirements interpretation, design, and implementation of Data Analytics applications. This individual will apply proven industry and technology experience, as well as communication skills, problem-solving skills, and knowledge of best practices, to issues related to the design, development, and deployment of mission-critical systems, with a focus on quality application development and delivery.

This role is key to the success of the Data Engineering capability at Telstra and will be responsible and accountable for the following:

  • Design, develop, and maintain data pipelines using Spark with Java and related technologies on the AWS cloud ecosystem.
  • Design, develop, and maintain Java-based applications.
  • Work with high volume data and ensure data quality and accuracy.
  • Implement data security and privacy best practices to protect sensitive data.
  • Develop and maintain documentation on data pipeline architecture, data models, and data workflows.
  • Monitor and troubleshoot data pipelines to ensure they are performing optimally.
  • Stay up to date with the latest developments in AWS, Spark, Java, and related technologies and apply them to solve business problems
  • Optimize data pipelines for cost and performance.
  • Automate data processing tasks and workflows to reduce manual intervention.
  • Ability to work in Agile Feature teams.
  • Provide training and educate other team members on core capabilities, and help them deliver high-quality solutions, deliverables, and documentation.
  • Self-motivated to design and develop against user requirements, then test and deploy the changes into production.
  • Knowledge of AI and Generative AI is a plus.
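The data-quality and monitoring responsibilities above can be sketched with a minimal, hypothetical check in plain Python; the column names and rules are illustrative assumptions, not any real Telstra schema:

```python
# Minimal data-quality check of the kind a pipeline might run before loading:
# verify required columns exist, count null keys, and report anomaly counts
# that a monitor could alert on. All column names here are illustrative.

REQUIRED_COLUMNS = {"customer_id", "event_time", "bytes_used"}

def quality_report(rows: list) -> dict:
    """Return simple counts a pipeline monitor could alert on."""
    missing_cols = REQUIRED_COLUMNS - set(rows[0]) if rows else REQUIRED_COLUMNS
    null_keys = sum(1 for r in rows if r.get("customer_id") is None)
    negatives = sum(1 for r in rows if (r.get("bytes_used") or 0) < 0)
    return {
        "row_count": len(rows),
        "missing_columns": sorted(missing_cols),
        "null_customer_ids": null_keys,
        "negative_usage": negatives,
        "passed": not missing_cols and null_keys == 0 and negatives == 0,
    }

rows = [
    {"customer_id": "c1", "event_time": "2024-07-01T00:00:00", "bytes_used": 1024},
    {"customer_id": None, "event_time": "2024-07-01T00:05:00", "bytes_used": 2048},
]
print(quality_report(rows))
```

A real pipeline would typically emit these counts as metrics and fail or quarantine the batch when `passed` is false.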

Who we're looking for: 

  • Hands-on experience with Spark Core, Spark SQL, and SQL.
  • Hands-on experience with the AWS ecosystem (with Java, PySpark/Flink/Kinesis, etc.).
  • Hands-on experience with Java development and the Spring Boot and Spring frameworks.
  • Experience working with file formats and HBase.
  • Experience with high volume data processing and data streaming technologies
  • Hands on experience on data modelling, schema design, and development using SQL and related technologies.
  • Familiarity with data security and privacy best practices
  • Good exposure to TDD (test-driven development).
  • Build algorithms to enhance data processing and analysis.
  • Exposure to CI tools such as Git, Bitbucket, GitHub, GitLab, and AWS DevOps.
  • Exposure to CD tools such as Jenkins, Bamboo, and AWS DevOps.
  • Cloud exposure: AWS.
  • Experience with and knowledge of AWS data offerings: OpenSearch, DynamoDB, Flink, Kinesis, S3, EMR, etc.
  • Interpret trends, patterns, and anomalies within datasets.
  • Exposure to Power BI (nice to have).
  • Experience/knowledge on building reusable frameworks.
  • Good understanding of data architecture and design principles (AWS Lambda architecture and Step Functions).
  • Explore ways to improve data quality and consistency.
  • Exposure to code quality practices: static and dynamic code scans.
  • Good knowledge of NoSQL and SQL databases (HBase, Postgres, etc.).
  • Experience with enterprise data management, data warehousing, data modelling, Business Intelligence, and data integration.
  • Work closely with cross-functional teams on data-related projects.
  • Expertise in SQL and stored procedures.
  • Experience in designing solutions for multiple large data warehouses with a good understanding of cluster and parallel architecture as well as high-scale or distributed RDBMS and/or knowledge on NoSQL platforms.
  • Propose best practices/standards
  • Translate, load and present disparate datasets in multiple formats/sources including JSON, XML etc.
  • Should be able to provide scalable and robust solutions depending on business needs.
  • Should be able to compare tools and technologies and recommend the most suitable option.
  • Should be well versed in the overall IT landscape and its technologies, and able to analyse how different technologies integrate with each other.