About This Role
Design, build, and scale high-impact data solutions within the Palantir Foundry platform. Transform complex, large-scale data into reliable, production-ready analytics to power AI-enabled decision making.
Responsibilities
- Build, maintain, and optimize scalable data pipelines and workflows in Palantir Foundry using Python, PySpark, and SQL
- Develop data models and ontologies, mapping datasets to objects with strong governance, lineage, and security
- Monitor and tune performance for large-scale data processing environments
- Partner with business stakeholders and engineering teams to translate requirements into technical solutions
- Support platform reliability by managing data ingestion, integration, and overall Foundry uptime
Requirements
- 2-3 years of experience in data engineering, ETL workflows, and big data technologies (e.g., Spark, Kafka, AWS)
- Strong proficiency in Python, PySpark, SQL, and data modeling best practices
- Hands-on Palantir experience, including Foundry certification (Code Repos, Workbooks, Pipeline Builder)
- Experience working in Agile environments with large, complex datasets
- Familiarity with collaboration and versioning tools such as JIRA, Git, and Confluence
Skills (all required)
Python, SQL, AWS, Kubernetes, Docker, Agile, Jira, Confluence, Git, S3, Kafka, Palantir Foundry, Spark, AWS Glue, PySpark
Certifications
- Palantir Foundry certification (Code Repos, Workbooks, Pipeline Builder) (Required)