Cooper Standard
Livonia, Michigan, USA
Data Science Intern
About this role
We are seeking a Data Science Intern to help develop and deliver the core products of our AI stack: trained models of process physics and AI controllers for edge deployment. Unlike a traditional theoretical research role, this position focuses heavily on implementation. You will define, automate, and optimize the entire analytics path—including signal processing, training, validation, deployment, and monitoring. You will work within a dynamic team to deploy models for inference within mission-critical manufacturing environments, optimizing for real-time response.
Key Responsibilities
- Pipeline & Code Development: Design, develop, and iterate on algorithms using robust coding practices, including the use of code development tools like GitHub.
- Model Deployment: Deploy models in local or cloud environments. You will help deploy policies to the edge, where they run real-time inference and continuously make adjustments to physical hardware.
- Data Engineering: Assess new data sources and interact with data ingestion pipelines or data warehouses.
- Automation: Automate the creation of controls and the analytics path to ensure the technology is scalable.
- Collaboration: Collaborate with process domain experts to determine what data to collect to ensure high data quality.
Required Technical Skills
- We prioritize candidates with strong software engineering fundamentals and fluency in modern data stacks.
- Core Languages: Proficiency in Python is required.
- Libraries & Frameworks: Expertise with data analysis and ML libraries, specifically TensorFlow, PyTorch, NumPy, pandas, scikit-learn, and Spark ML.
- DevOps & Environment: Fluency with standard DevOps tools. Experience building and compiling ML libraries on Linux.
- Cloud Technologies: Familiarity with cloud technologies such as AWS, Azure, Databricks, or Hadoop/Spark.
- Data Visualization: Expertise in data visualization techniques to present actionable insights.
Domain Knowledge
- While programming is the primary focus of this role, familiarity with the following application areas is essential:
- Time Series & Deep Learning: Demonstrated skills in time series data analysis and Deep Learning architectures including LSTM, RNN, and CNN.
- Reinforcement Learning: Understanding of reinforcement learning techniques for developing optimal control policies.
Qualifications & Soft Skills
- Education: Currently pursuing or recently completed a Master’s Degree or PhD in Computer Science, Engineering, or Science.
- Problem Solving: Intellectual curiosity, entrepreneurial drive, and innovative thinking.
- Communication: Ability to explain moderately complex information in a concise manner to both specialists and non-technical audiences.