Data Scientist at CoLab Software
Job details
At CoLab, we help engineering teams bring life-changing products to the world years sooner. Our product, CoLab, is the world’s first Design Engagement System (DES), a category-defining product that engineering teams use to engage in meaningful, productive design conversations, catch preventable mistakes, and get to market faster. Our customers include some of the largest engineering organizations in the world, such as Ford, Johnson Controls, Komatsu, and Polaris, across the industrial equipment, consumer products, automotive, aerospace & defense, and shipbuilding industries.
We’re not just offering a job; we’re inviting you to join a groundbreaking team that drives innovation in the tech industry. In this role, you’ll have the opportunity to work on cutting-edge projects with a team that values your unique skills and perspectives. Our supportive and collaborative environment is designed to foster your professional growth and creative problem-solving. With competitive compensation, comprehensive benefits, and a strong commitment to work-life balance, CoLab Software is where your career can truly thrive and where you can make a meaningful impact.
Frequently cited statistics show that people who identify with historically marginalized groups are likely to apply to jobs only if they meet 100% of the qualifications. We encourage you to help us break that statistic and apply even if you don’t meet every single qualification—your potential is what matters most to us.
As a Data Scientist, you’ll play a pivotal role in the development and deployment of our machine learning models, working closely with engineering, platform engineering, and product teams. You’ll ensure that our models are not only cutting-edge but also production-ready, scalable, and maintainable. This role is ideal for someone who thrives in a fast-paced SaaS environment and enjoys the blend of data science, engineering, and operational excellence.
Responsibilities
- Build and Train Models: Design, implement, and deploy machine learning models to drive insights and automate business processes.
- Feature Engineering: Develop and optimize features for model training using large, complex datasets.
- Experimentation: Lead hypothesis-driven analysis and A/B testing to inform model and product development.
- Data Storytelling: Communicate findings through compelling visualizations and presentations, translating data into actionable insights for stakeholders.
- Model Deployment and Monitoring: Oversee end-to-end model deployment using MLOps best practices, ensuring models are robust, reproducible, and scalable.
- Pipeline Automation: Work with Platform Engineering to develop and maintain automated data pipelines to support continuous integration and deployment (CI/CD) for machine learning workflows.
- Model Monitoring and Maintenance: Set up monitoring and alerting for model drift, accuracy, and performance to maintain high-quality predictions in production.
- Optimize Infrastructure: Work with engineering and platform teams to optimize cloud infrastructure, model serving, and resource allocation.
- Cross-functional Collaboration: Partner with product managers, engineers, and other data scientists to integrate ML solutions into the product and deliver on key business objectives.
- Data Governance and Security: Ensure compliance with data privacy and security regulations in all aspects of data processing and model deployment.
- Continuous Improvement: Advocate for best practices and contribute to the development of reusable frameworks and processes that accelerate the ML lifecycle.
Qualifications
- Bachelor’s or Master’s degree in Data Science, Computer Science, Statistics, or a related field.
- 3+ years of experience in data science and machine learning, preferably in a SaaS environment.
- Strong programming skills in Python (experience with libraries like Pandas, Scikit-Learn, TensorFlow, PyTorch).
- Proficient in SQL and experience with data querying and transformation.
- Experience with cloud platforms (AWS, GCP, or Azure) and MLOps tools (e.g., MLflow, Kubeflow, Docker).
- Familiarity with CI/CD pipelines, version control (Git), and automated testing.
- Excellent problem-solving abilities, attention to detail, and the ability to work autonomously in a fast-paced environment.
- Strong communication skills with the ability to explain complex concepts to both technical and non-technical audiences.
- Willingness to raise your hand when you see something that could be done or built better
- Experience working on SaaS or large-scale distributed systems would be considered an asset
- A consistent track record of building and maintaining highly scalable products would be considered an asset
Measures of success
- Model Accuracy and Impact: Achieving target accuracy and driving measurable business impact.
- Deployment Efficiency: Speed and reliability in moving models to production, with minimal downtime.
- Monitoring and Maintenance: Effective model monitoring with prompt issue detection and resolution.
- Cross-Functional Collaboration: Positive feedback and alignment with product, engineering, and platform teams.
- Process and Innovation Contributions: Development of reusable tools and frameworks to improve the ML lifecycle.