Internship – Data Engineering
Job Description
Duration: 3–4 Months
Location: Bengaluru / Remote
Mode: Full-Time Internship
About the Role
This internship is designed for candidates who want to work with real-world datasets, build ETL pipelines, and support data-driven product features. You’ll gain hands-on experience with modern data engineering workflows, including ingestion, transformation, storage, and analytics.
Key Responsibilities
- Build and maintain ETL pipelines for structured and unstructured data.
- Work with databases such as PostgreSQL, MongoDB, or Elasticsearch.
- Assist in developing APIs to expose processed data to applications.
- Collaborate with product and engineering teams to understand data requirements.
- Ensure data quality, validation, and error handling across pipelines.
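For candidates new to ETL, the responsibilities above roughly map to an extract, transform, load sequence. A minimal sketch using only the Python standard library (all function names and the sample data are hypothetical, for illustration only):

```python
# Minimal illustration of an extract-transform-load (ETL) step with
# basic validation and error handling. Names and data are hypothetical.
import json

def extract(raw_records):
    """Parse raw JSON strings; skip records that fail to parse."""
    rows = []
    for raw in raw_records:
        try:
            rows.append(json.loads(raw))
        except json.JSONDecodeError:
            continue  # error handling: drop malformed input
    return rows

def transform(rows):
    """Keep rows with a positive 'amount'; normalize the email field."""
    clean = []
    for row in rows:
        if not isinstance(row.get("amount"), (int, float)) or row["amount"] <= 0:
            continue  # validation: reject missing or non-positive amounts
        row["email"] = row.get("email", "").strip().lower()
        clean.append(row)
    return clean

def load(rows, store):
    """Write cleaned rows into an in-memory store keyed by email."""
    for row in rows:
        store[row["email"]] = row["amount"]
    return store

raw = ['{"email": " A@X.COM ", "amount": 10}',
       '{"email": "b@x.com", "amount": -5}',
       'not json']
store = load(transform(extract(raw)), {})
print(store)  # only the first record survives parsing and validation
```

In real pipelines the store would be a database such as PostgreSQL and the stages would be orchestrated by a scheduler, but the shape of the work is the same.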
Required Skills
- Proficiency in Python or SQL.
- Understanding of database concepts (schema, indexing, normalization).
- Familiarity with ETL concepts and data modeling.
- Basics of Linux and Git.
Nice to Have
- Exposure to tools like Apache Airflow, Kafka, Spark, or Pandas.
- Experience with cloud platforms (AWS, GCP, or Azure).
- Interest in data security, governance, or analytics.
Who Should Apply
- Students or early-career engineers interested in large-scale data systems.
- Candidates who like solving data problems—not just running ML models.
- Individuals eager to learn end-to-end data workflows.
What You’ll Gain
- Practical experience building production-ready data pipelines.
- Exposure to industry tools and distributed systems.
- Mentorship, code reviews, and structured training.
- Internship certificate on completion.
Apply for This Position
Fill out the form below to submit your application. We’ll review it and get back to you soon.