Turing Inc. All Job Listings (For English Speakers)

3017_Data Engineer (Autonomous Driving Model Evaluation Infrastructure & Analytics)

About the role
◆This role is open to software engineers with experience in data infrastructure and application development.◆
The mission is to design, build, and operate the evaluation and analytics infrastructure for E2E autonomous driving models. At Turing, continuously evaluating model quality and performance from real-world driving data logs and simulator data — and feeding those insights back into model training and system improvement — is essential. You will be responsible for building the systems that support development decision-making through analytics infrastructure, BI tooling, and evaluation metric design.
■What you will work on
・Design, implementation, and improvement of evaluation metric frameworks for E2E autonomous driving models
・Analysis of autonomous driving data logs and implementation of reporting processes
・Close collaboration with the ML team to define and align on evaluation requirements
・Design, build, and operation of evaluation execution infrastructure and automated evaluation pipelines (Amazon EKS, AWS Lambda, Amazon API Gateway, FastAPI, PostgreSQL, etc.)
・Design and sharing of evaluation dashboards using DWH and BI tools (Databricks, PySpark, etc.)
■Defining evaluation standards in a world without ground truth
Autonomous driving is a humanity-scale challenge at the intersection of AI, software, and data. The role of this position within that challenge is to objectively define what "good" looks like for autonomous driving models — and to make that definition shareable and actionable. In E2E autonomous driving, humans do not define the rules. What matters is how the model understands and interprets the world, and how that understanding translates into outcomes. That is why an evaluation infrastructure that can explain — based on data, not intuition — whether a model is genuinely improving, what has gotten better, and where the remaining challenges lie is absolutely essential.
By leveraging your experience in data infrastructure and application development to design evaluation metrics and visualizations based on real driving data and simulation results, you will make the state of the model understandable to everyone on a shared axis — and drive development decision-making forward. This is a critical role.
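The evaluation-metric frameworks mentioned above can be made concrete with a small, self-contained sketch. The metric below, Average Displacement Error (ADE) between a model's planned trajectory and the expert trajectory, is a common choice in the autonomous-driving literature; it is an illustration only, not a description of Turing's actual metrics, and all names are hypothetical.

```python
# Hypothetical sketch of one metric an E2E-model evaluation pipeline
# might compute: Average Displacement Error (ADE) between a model's
# planned trajectory and the expert (human) trajectory.
import math

def average_displacement_error(planned, expert):
    """Mean Euclidean distance between matched (x, y) waypoints."""
    if len(planned) != len(expert):
        raise ValueError("trajectories must have the same number of waypoints")
    dists = [math.hypot(px - ex, py - ey)
             for (px, py), (ex, ey) in zip(planned, expert)]
    return sum(dists) / len(dists)

planned = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.3)]
expert  = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
print(round(average_displacement_error(planned, expert), 4))  # 0.1333
```

In a real pipeline a metric like this would be computed per scenario over large log sets and aggregated into dashboards, which is where the DWH/BI tooling above comes in.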

EN_2001_Machine Learning Engineer — Open Position

About Turing
◆This role is open to ML engineers with expertise in one or more of the following domains: Autonomous Driving, Computer Vision, LLM, NLP, or Physical AI.◆
Turing is a deep-tech startup on a mission to achieve full self-driving. We have pursued an End-to-End (E2E) approach to autonomous driving AI since day one, under our mission "We Overtake Tesla." Today, our system can navigate Tokyo streets for 30+ consecutive minutes without human intervention — and we believe we are standing right at the inflection point of exponential model growth. Our next step is to evolve the E2E model into a Physical Foundation Model that unifies physical behavior with world understanding — combining driving, language, and multimodal reasoning to handle the complex, unpredictable scenes that only humans could previously manage.
■About this open position
This is an open application form for ML engineer candidates who are interested in Turing but haven't found an exact match among our current active listings. After an initial document screening, we will arrange a matching interview to identify the most suitable team and role based on your background and interests. In the "Message to the team" field of your application, please feel free to share:
・Your strengths and areas of expertise
・The type of work or approach you are looking for
・Your research/development experience (re-implementation, deployment, improvements, etc.)
■Example positions you may be matched to
・2010 — ML Engineer (Autonomous Driving Model Development) https://herp.careers/v1/turing/oMZZ-LyQ683f
・2011 — ML Engineer (Autonomous Driving VLA Model Development) https://herp.careers/v1/turing/aHze1GwzB0ZG
■What you will work on
・Research, reproduce, and implement state-of-the-art papers and existing implementations
・Evaluate existing models and implementations against our proprietary datasets
・On-vehicle model evaluation and experiment management
・Implementation of autonomous driving VLA models and E2E self-driving models
・Dataset creation and improvement
・Auto-labeling pipeline development and improvement
■Our development approach
・We pursue both data-centric and model-centric approaches in parallel.
・E2E autonomous driving is still an open problem — the model you build could become the next industry standard.
・Join us in exploring every possible approach in an uncharted field.
Our development cycle: Build dataset & model → Drive test → Analyze logs → Manage model. You will improve the system by experiencing your model in the real world, not just on paper.

EN_2010_Machine Learning Engineer (E2E Autonomous Driving Model Development)

◆This role is open to ML engineers with expertise in one or more of the following domains: Autonomous Driving, Computer Vision, or Machine Learning.◆
At Turing, we are developing an End-to-End autonomous driving ML model — a single machine learning model that takes input from vehicle-mounted cameras and directly outputs vehicle control commands. Autonomous driving model development is a true multi-disciplinary challenge, spanning far beyond machine learning alone. There are many areas to contribute to: data collection, dataset creation (data quality improvement, calibration, coordinate transforms), and model training (architecture design, training efficiency improvements), among others. We are looking for engineers with a background in autonomous driving — as well as engineers with outstanding expertise from software, robotics, or other industries. Let's tackle one of humanity's grand challenges together.
◾️What you will work on
We work on a wide range of problems — not just model architecture improvements, but also data quality and quantity challenges. The examples below are just a subset; if any of them resonate with your experience, we encourage you to apply.
◆Examples:
・Implementation of End-to-End autonomous driving models
・Planning and strategy for data collection
・Dataset creation and improvement
  - Auto-labeling model implementation and improvement
  - Camera and sensor calibration
・Implementation of model training algorithms
・Optimization and speed-up of model training code
・On-vehicle model evaluation and experiment management
・Research, reproduction, and implementation of state-of-the-art papers
◾️Our development approach
We pursue both data-centric and model-centric approaches in parallel. Challenges arise from many angles — data quality issues with various root causes, architecture and backbone exploration — giving the team a wide solution space to work in.
We also run large-scale training jobs on GPU clusters, so optimizing training code for speed is an active area of focus. E2E autonomous driving is still an open problem. The model you build could become the industry standard for the next generation of self-driving systems.
【Test your model in the real world】
Our development cycle: Build dataset & model → Drive test → Analyze experiment logs → Manage model. You will iterate on your models by experiencing them firsthand in a real vehicle — not just on paper. Use feedback from the physical world to drive your development forward.
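Camera and sensor calibration, listed above, ultimately reduces to coordinate transforms such as the pinhole projection below. This is a generic textbook sketch in plain Python, not Turing's calibration code, and the intrinsics values are invented for illustration.

```python
# Illustrative sketch: projecting a 3D point in the camera frame onto
# the image plane with a pinhole camera model. fx, fy are focal lengths
# in pixels; (cx, cy) is the principal point. Values are made up.

def project_point(point_cam, fx, fy, cx, cy):
    """Map a camera-frame (X, Y, Z) point to pixel coordinates (u, v)."""
    x, y, z = point_cam
    if z <= 0:
        raise ValueError("point must be in front of the camera (Z > 0)")
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v

# A point 10 m ahead, 1 m to the right, 0.5 m below the optical axis,
# seen by a hypothetical 1920x1080 camera.
u, v = project_point((1.0, 0.5, 10.0), fx=1000.0, fy=1000.0, cx=960.0, cy=540.0)
print(u, v)  # 1060.0 590.0
```

Real calibration work layers lens distortion and extrinsic (sensor-to-sensor) transforms on top of this core mapping.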

EN_2011_Machine Learning Engineer (Autonomous Driving VLA Model Development)

About Turing
◆This role is open to ML engineers with expertise in machine learning, autonomous driving, or computer vision, as well as software engineers with experience in large-scale MLOps or data infrastructure development, or engineers who have worked on ML or software engineering in the robotics domain.◆
At Turing, we are developing an End-to-End autonomous driving model — a system that takes input from vehicle-mounted cameras and directly controls the vehicle. Our mission is to develop a fully autonomous driving system. We are pursuing this through two main directions: training a neural network of several million parameters to imitate expert driver behavior across diverse scenarios, and running foundation models — including Vision-Language-Action (VLA) models — on the vehicle to handle a wide variety of driving situations. Applying VLA models to autonomous driving is still an emerging field even on a global scale, requiring us to explore the latest research papers and development cases while advancing the work experimentally. This means we need not only machine learning expertise, but also a wide range of engineering capabilities to drive the project forward. We are looking for members who are ready to take on these challenges. Development issues exist at many layers — from building training pipelines for foundation models to model quantization and optimization.
■What you will work on
※ You will not be responsible for all of the following. You will focus on areas where your strengths shine, while also expanding into adjacent domains.
・Data calibration and coordinate transformation between different sensor devices
・Dataset creation and improvement
・Research, reproduction, and implementation of papers and existing implementations
・Evaluation of existing implementations using our proprietary datasets
・Model quantization and optimization
・On-vehicle model evaluation and experiment management
・Implementation of autonomous driving VLA models
・Auto-labeling implementation
■Enjoy being at the frontier of Embodied AI
Giving AI a physical presence and enabling it to deliver value in the real world — autonomous driving is exactly where humanity is pushing this frontier today. You will need to build unique ML pipelines while leveraging the knowledge already accumulated within the company. We are looking for someone who can drive development in a domain with almost no existing reference points.
■Test your model in the real world
Our development cycle: Build dataset & model → Drive test → Analyze experiment logs → Manage model. You will iterate on your models by experiencing them firsthand in a real vehicle. Use feedback from the physical world to drive your development forward.
■Who is thriving in this role
・NLP researchers from research institutions
・System engineers / data scientists from system development companies
・ML / software engineers from large-scale ad-tech companies

EN_2012_Machine Learning Engineer (Large-Scale Training Infrastructure)

About the role
◆This position is for software engineers with an ML background who want to take on large-scale, cutting-edge development challenges.◆
At Turing, we are developing an End-to-End autonomous driving ML model — a system that takes input from vehicle-mounted cameras and directly controls the vehicle. In this role, you will take ownership of the full stack — from designing and implementing large-scale distributed training systems to optimizing training pipelines. Working with massive volumes of driving data (video, vehicle logs, etc.), your mission is to build scalable, high-performance machine learning pipelines that make the most of available training resources.
■Examples of what you will work on
・Design and implement pipelines for large-scale data ingestion, processing, and curation, with full lifecycle management of datasets
・Build training infrastructure capable of handling petabyte-scale multimodal data
・Accelerate video decoding during training
・Design simple yet extensible data schemas for integrating diverse information
Through these efforts, you will evolve the model development cycle itself through the power of engineering — achieving both improved model performance and faster development velocity, while maximizing the scalability and performance of the overall system. Unconstrained by existing frameworks, you will design and evolve training infrastructure from the ground up with the next scale in mind — pushing the boundaries of autonomous driving development through engineering.
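One generic idea behind accelerating video decoding during training is to overlap decoding with consumption, so the trainer never waits on I/O. The sketch below uses only the standard library with a placeholder `decode_clip` function; it illustrates the pattern, and is in no way Turing's actual pipeline.

```python
# Hedged sketch of decode/consume overlap: later clips decode on
# background threads while the caller consumes earlier ones.
# decode_clip is a stand-in for an expensive video-decode call.
from concurrent.futures import ThreadPoolExecutor

def decode_clip(clip_id):
    # Placeholder: a real implementation would decode video frames here.
    return [clip_id * 10 + i for i in range(3)]  # fake "frames"

def prefetching_batches(clip_ids, workers=4):
    """Yield decoded clips in order, overlapping decode with consumption."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(decode_clip, c) for c in clip_ids]
        for fut in futures:        # consume in submission order
            yield fut.result()     # remaining decodes run concurrently

batches = list(prefetching_batches([1, 2, 3]))
print(batches)  # [[10, 11, 12], [20, 21, 22], [30, 31, 32]]
```

At petabyte scale the same idea shows up as sharded datasets, prefetch queues, and hardware-accelerated decoders rather than a thread pool, but the overlap principle is the same.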

EN_2014_Reinforcement Learning Engineer (End-to-End Autonomous Driving Model Development)

About the role
◆This role is open to engineers and researchers with expertise in reinforcement learning.◆
Turing is a deep-tech startup on a mission to achieve full self-driving. The mission of our reinforcement learning team is to realize a robust E2E autonomous driving system that operates in the real world using reinforcement learning technology. To achieve this, we are working on a 3D Gaussian Splatting-based simulator for evaluation and training, a large-scale distributed reinforcement learning infrastructure, and control systems (such as MPC).
■What you will work on
・Research and development of robust E2E autonomous driving models that operate in the real world
・Development of simulators for E2E autonomous driving
・Building large-scale distributed reinforcement learning infrastructure
・Research and development of reinforcement learning algorithms to realize E2E autonomous driving models capable of real-world operation
・Performance validation of reinforcement learning models through actual driving tests

EN_2015_Machine Learning Engineer (Autonomous Driving — World Model / Video Generation Model Development)

About the role
◆This role is open to engineers with a machine learning background who have worked on large-scale development of video generation models or foundation models, as well as the MLOps / data infrastructure, distillation, optimization, and on-device deployment that supports them. (Experience in autonomous driving / CV / robotics is a plus.)◆
At Turing, we are developing an End-to-End autonomous driving model that takes input from vehicle-mounted cameras and directly controls the vehicle. As we work toward full self-driving, we are in an exploratory phase — combining imitation learning with the rapid advances in video generation and world model research to actively find what works. In particular, large-scale pretraining on not only driving data but also general video data (video generation, self-supervised learning, etc.) has been shown to significantly boost downstream task performance (behavior prediction, planning, etc.), and this trend is growing stronger. With this in mind, Turing is pursuing a World Action Model (WAM) approach — a unified pipeline that spans modern model families such as video generation models, image/video foundation models (e.g., DINOv3, V-JEPA-style concepts), and world models — from training at scale, through validation and downstream task integration, to distillation, quantization, inference optimization, and final deployment on real vehicles. We are looking for members who can take these ambitious research directions all the way to something that actually works in the real world.
■What you will work on
・ML development centered on WAM (World Action Model) for autonomous driving
・Large-scale pretraining (scaling training with driving data + general video data, etc.)
・Modeling, implementation, and validation using video generation models and world models
・Validation and application of image/video foundation models based on self-supervised learning (e.g., DINOv3, V-JEPA-style, etc.)
・Application to downstream tasks (e.g., behavior prediction, planning), evaluation design, and improvement
・Model compression and acceleration via distillation, quantization, and inference optimization
・Building and improving experimental infrastructure (data pipelines, reproducibility, experiment management, model operations)
・Literature review and implementation validation in related areas (Transformers, robotics, world models, etc.)
■Enjoy being at the frontier of Physical AI
Giving AI a physical presence and enabling it to deliver value in the real world — autonomous driving is exactly where humanity is pushing this frontier today. You will need to build unique ML pipelines while leveraging the knowledge already accumulated within the company. We are looking for someone who can drive development in a domain with almost no existing reference points.
■Test your model in the real world
Our development cycle: Build dataset & model → Drive test → Analyze experiment logs → Manage model. You will iterate on your models by experiencing them firsthand in a real vehicle. Use feedback from the physical world to drive your development forward.
■Who is thriving in this role
・Engineers with strengths in robotics, world models, or autonomous driving (behavior prediction, planning, etc.) who have led model development
・Engineers who have pursued large-scale data preprocessing, filtering, and data quality design, and have achieved training reproducibility and scaling in practice
・Engineers from research labs or corporate research teams who have taken exploratory topics all the way from implementation → validation → improvement to tangible results
・Engineers who can quickly catch up with the work of leading researchers and recent papers, reproduce and extend them, and connect the results to product or on-vehicle validation

EN_3001_Software Engineer (Open Position)

About the role
◆This role is open to engineers with deep expertise in software domains that contribute to the overall realization of autonomous driving systems — including web service development, MLOps, and low-level software development (including Linux and embedded systems).◆
Turing is a deep-tech startup on a mission to achieve full self-driving. Under our mission "We Overtake Tesla," we have pursued an End-to-End (E2E) approach to autonomous driving AI since day one. Today, our system can navigate Tokyo streets for 30+ consecutive minutes without human intervention — and we believe we are standing right at the inflection point of exponential growth across both our models and the overall system. Our next step is to evolve the E2E model into a Physical Foundation Model that unifies physical behavior with world understanding — combining driving, language, and multimodal reasoning to handle the complex, unpredictable scenes that only humans could previously manage.
■What we expect from software engineers
The key to pushing this mission forward is using the power of software to continuously accelerate the development cycle from training → validation → real vehicle, and to make autonomous driving AI work as a system in the real world. This means:
・A continuous data flywheel that keeps feeding model improvements
・A validation workflow where driving experiments, evaluation, and analysis operate as one integrated loop
・Software design that keeps the full autonomous driving system — sensors, inference, and control — running safely and at high performance
Turing's software engineers are not simply implementing individual features. They are the core of what keeps autonomous driving AI advancing as a coherent system.
■About this open position
This is an open application form for software engineer candidates who are interested in Turing but haven't found an exact match among our current active listings. After an initial document screening, we will arrange a matching interview to identify the most suitable team and role based on your background and interests. In the "Message to the team" field of your application, please feel free to share:
・Your strengths and areas of expertise
・The type of work you are looking for (design, implementation, operations, optimization, etc.)
・Information that demonstrates your past results (GitHub, design documents, project overviews, talks / articles, etc.)
■Example positions you may be matched to
▽MLOps & model development
・3011 — Software Engineer (Autonomous Driving MLOps Infrastructure Development) https://herp.careers/v1/turing/Qc3t_q0FYFq
・3012 — Software Engineer (Autonomous Driving VLA Model Development) https://herp.careers/v1/turing/lqrK5k1NqzyU
・3013 — Robotics Software Engineer (Autonomous Driving Development) https://herp.careers/v1/turing/SMo30RyV5I9L
・3016 — Computer Vision Engineer (Image Processing) https://herp.careers/v1/turing/uMUjp4fFtvZ7
▽Autonomous driving system development
・3032 — Software Engineer (OS, Embedded Systems & Firmware Development) https://herp.careers/v1/turing/01gtnhsLWgVe
・3034 — Software Engineer (Middleware & In-Vehicle System Development) https://herp.careers/v1/turing/abHB5x7zASkv
・3035 — Software Engineer (In-Vehicle Application Development) https://herp.careers/v1/turing/IFKgdYLwpdc7

EN_3011_Software Engineer (MLOps)

About the role
◆This role is open to software engineers with experience developing and operating large-scale, high-reliability data infrastructure and distributed systems.◆
At Turing, we are developing an End-to-End autonomous driving ML model — a system that takes input from vehicle-mounted cameras and directly controls the vehicle. To scale our development, we need to address a wide variety of bottlenecks that stand in the way of continuous data and model improvement. The mission of this role goes beyond simply building infrastructure — it is to identify problems across teams, form hypotheses, explore solutions, and drive resolution.
■What you will work on
・Continuous improvement of data and models in collaboration with stakeholders
・Building and operating large-scale data infrastructure
・Optimizing processing pipelines and data transfer across cloud and on-premises environments
・Designing and implementing internal tools, APIs, and web services
・System-wide architecture design and performance tuning
Turing's development target is autonomous driving. The software you build will be deployed on real vehicles and operate in the physical world. This presents technical challenges unlike those in web or SaaS development. This is a domain where strong ML engineers and software/infrastructure engineers must work as one — and where your engineering experience can be applied to one of humanity's grand challenges.

EN_3012_Software Engineer (Autonomous Driving VLA Model Development)

About the role
◆This role is open to ML engineers with expertise in machine learning, autonomous driving, or computer vision, as well as software engineers with experience in large-scale MLOps or data infrastructure development, or engineers who have worked on ML or software engineering in the robotics domain.◆
At Turing, we are developing an End-to-End autonomous driving model — a system that takes input from vehicle-mounted cameras and directly controls the vehicle. Our mission is to develop a fully autonomous driving system. We are pursuing this through two main directions: training a neural network of several million parameters to imitate expert driver behavior across diverse scenarios, and running foundation models — including Vision-Language-Action (VLA) models — on the vehicle to handle a wide variety of driving situations. Applying VLA models to autonomous driving is still an emerging field even on a global scale, requiring us to explore the latest research papers and development cases while advancing the work experimentally. This means we need not only machine learning expertise, but also a wide range of engineering capabilities to drive the project forward. We are looking for members who are ready to take on these challenges. Development issues exist at many layers — from building training pipelines for foundation models to model quantization and optimization.
■What you will work on
※ You will not be responsible for all of the following. You will focus on areas where your strengths shine, while also expanding into adjacent domains.
・Data calibration and coordinate transformation between different sensor devices
・Dataset creation and improvement
・Research, reproduction, and implementation of papers and existing implementations
・Evaluation of existing implementations using our proprietary datasets
・Model quantization and optimization
・On-vehicle model evaluation and experiment management
・Implementation of autonomous driving VLA models
・Auto-labeling implementation
■Enjoy being at the frontier of Embodied AI
Giving AI a physical presence and enabling it to deliver value in the real world — autonomous driving is exactly where humanity is pushing this frontier today. You will need to build unique ML pipelines while leveraging the knowledge already accumulated within the company. We are looking for someone who can drive development in a domain with almost no existing reference points.
■Test your model in the real world
Our development cycle: Build dataset & model → Drive test → Analyze experiment logs → Manage model. You will iterate on your models by experiencing them firsthand in a real vehicle. Use feedback from the physical world to drive your development forward.
■Who is thriving in this role
・NLP researchers from research institutions
・System engineers / data scientists from system development companies
・ML / software engineers from large-scale ad-tech companies

EN_3013_Robotics Software Engineer (Autonomous Driving Development)

About the role
◆This role is open to engineers with experience in machine learning, autonomous driving, or computer vision, as well as software engineers with experience in large-scale MLOps or data infrastructure development, or engineers who have worked on ML or software engineering in the robotics domain.◆
Turing's mission is to develop a fully autonomous driving system. The overall autonomous driving system consists of various modules that communicate with each other via Pub/Sub messaging. While we keep dependencies as simple as possible, solving a wide range of software issues is essential for advancing autonomous driving model development and improving model accuracy. We are looking for engineers who can address software issues across multiple layers. Development challenges exist at many levels — from building ML model training pipelines and model quantization/optimization, to sensor data calibration and vehicle motion control implementation.
※ At the time of joining, we expect you to make an immediate impact by leveraging your existing strengths in a specific area, while gradually expanding your contributions across different layers over time.
■What you will work on
※ You will not be responsible for all of the following. You will focus on areas where your strengths shine, while also expanding into adjacent domains.
・Data calibration and coordinate transformation between different sensor devices
・Dataset creation and improvement
・Research, reproduction, and implementation of papers and existing implementations
・Evaluation of existing implementations using our proprietary datasets
・Model quantization and optimization
・On-vehicle model evaluation and experiment management
・Design and implementation of vehicle motion control systems and algorithms
・Evaluation and tuning of control performance using real vehicles
■Test your system and ML models in the real world
Our development cycle: Build dataset & model → Drive test → Analyze experiment logs → Manage model. You will iterate on your models by experiencing them firsthand in a real vehicle. Use feedback from the physical world to drive your development forward.
■Who is thriving in this role
・Software engineers with development experience in the autonomous driving domain
・Engineers who have worked on software / control development at automotive companies
・Software engineers / control engineers from system development companies
・ML / software engineers from large-scale ad-tech companies

EN_3015_Simulation Engineer (End-to-End Autonomous Driving Model Development)

About the role
◆This role is open to engineers with expertise in the simulation domain.◆
Turing is a deep-tech startup on a mission to achieve full self-driving. To accelerate research and development of our End-to-End autonomous driving model, we are building next-generation simulation technology. Specifically, we are combining a 3D Gaussian Splatting-based simulator for evaluation and training, a large-scale distributed training infrastructure, and control systems (such as MPC) to enable efficient and robust autonomous driving development that is not dependent on real-vehicle testing. The Simulation Engineer in this role will lead the design and development of the closed-loop simulator that forms the foundation of this effort, working closely with ML engineers to accelerate the advancement of autonomous driving technology.
■What you will work on
・Development of a 3DGS-based closed-loop simulator
・Support for End-to-End autonomous driving model development
・Integration with distributed training infrastructure
■Goal of simulator development
We will build a high-fidelity simulation environment that leverages large-scale generated 3DGS scenes to enable training and evaluation of End-to-End models. We aim to integrate control layers and physical simulation to create an environment where control algorithm exploration can be conducted without relying on real-vehicle testing.
■What does "model development support" mean?
We will support pretraining of E2E models suited for reinforcement learning and imitation learning, and drive efficient E2E model exploration including the control layer.
■Integration with distributed training infrastructure
We will expand our simulation environment to support robust model training across diverse traffic scenarios and randomized physical properties. To make the simulation environment a shared infrastructure usable across the entire company, it needs to connect seamlessly with our existing model development data pipeline and training infrastructure. Rather than simply building a simulator, we aim to establish it as a company-wide common platform that supports model development across all teams.
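At the heart of any closed-loop simulator sits a vehicle-dynamics model that is stepped each tick against the controller's commands. The kinematic bicycle model below is the standard textbook starting point for such a step function; it is purely illustrative, with invented parameter values, and says nothing about Turing's actual simulator.

```python
# Minimal sketch of a per-tick dynamics update in a closed-loop sim:
# the kinematic bicycle model. speed in m/s, steer in radians,
# wheelbase in meters, dt in seconds (values are illustrative).
import math

def bicycle_step(x, y, heading, speed, steer, wheelbase=2.7, dt=0.1):
    """Advance the vehicle pose by one simulation tick."""
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    heading += speed / wheelbase * math.tan(steer) * dt
    return x, y, heading

# Drive straight at 10 m/s for one tick.
x, y, h = bicycle_step(0.0, 0.0, 0.0, speed=10.0, steer=0.0)
print(round(x, 3), round(y, 3), round(h, 3))  # 1.0 0.0 0.0
```

A real closed-loop setup renders the 3DGS scene from the new pose, feeds the image to the driving model, and passes the model's commands back into the next step of a (much richer) dynamics model.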

EN_3016_Computer Vision Engineer (Image Processing)

About the role
◆This role is open to engineers with experience in machine learning, autonomous driving, or computer vision, as well as software engineers with experience in large-scale MLOps or data infrastructure development, or engineers who have worked on ML or low-level software engineering.◆
Turing's mission is to develop a fully autonomous driving system. The overall autonomous driving system consists of various modules that communicate with each other via Pub/Sub messaging. While we keep dependencies as simple as possible, solving a wide range of software issues is essential for advancing autonomous driving model development and improving model accuracy. This position supports the autonomous driving AI data pipeline through image processing expertise. At Turing, where video and image data are the primary modality, knowledge of image encoding/decoding is a critical technical area that contributes to the entire machine learning pipeline. You will solve technical challenges that span multiple layers of the stack.
■What you will work on
※ You will not be responsible for all of the following. You will focus on areas where your strengths shine, while also expanding into adjacent domains.
・Data calibration and coordinate transformation between different sensor devices
・Acceleration of image and video data processing pipelines
・On-vehicle model evaluation and experiment management
■Test your system and ML models in the real world
Our development cycle: Build dataset & model → Drive test → Analyze experiment logs → Manage model. You will iterate on your models by experiencing them firsthand in a real vehicle. Use feedback from the physical world to drive your development forward.
■Who is thriving in this role
・Software engineers with development experience in the autonomous driving domain
・Engineers with experience in image processing in edge environments such as cameras
・Engineers who have worked on software / control development at automotive companies
・Software engineers / control engineers from system development companies
・ML / software engineers from large-scale ad-tech companies

EN_3018_Software Engineer (Web Evaluation Platform)

About the role
◆This role is open to web engineers with experience in web / SaaS development.◆
In this position, you will be responsible for developing the web dashboard and platform used to evaluate and analyze the performance of Turing's E2E (End-to-End) autonomous driving models. In the autonomous driving development process, the vast amounts of driving data and model inference logs from simulations need to be presented in a form that stakeholders — including engineers — can intuitively understand. The mission of this role is not simply to build an admin interface, but to construct a visualization platform that deeply integrates map data, spatial data, and multi-channel sensor logs to "dissect the intelligence of autonomous driving."
■What you will work on
・Design and development of high-quality web applications for stakeholders — featuring autonomous driving AI evaluation metrics (KPIs), intervention point logs, and synchronized playback of video and sensor data
・Visualization of spatial data and vehicle logs using Mapbox, Deck.gl, and similar tools
・Backend development for visualization services
・UI/UX proposals and implementation from the perspective of "how to present complex autonomous driving data in a way that accelerates decision-making"

EN_3030_Software Engineer (Training & Inference Optimization)

About the role
◆This role is open to engineers with a background in ML or low-level software engineering who have worked on training optimization and inference optimization.◆
Turing is a deep-tech startup working to achieve full self-driving through multimodal generative AI. We are looking for engineers who can take on training and inference optimization challenges. To keep vehicles operating stably on Turing's autonomous driving technology, you will need to deeply understand the requirements of the software layers — including the AI systems and the autonomous driving software — and take ownership of hardware control. Your mission will be to deliver not only a wide range of functionality, but also high performance in terms of reliability and maintainability.
■Mission
You will be responsible for realizing inference with autonomous driving models. You will be required to leverage all of your knowledge across hardware, compilers, and machine learning model development to achieve stable inference execution within the resource-constrained in-vehicle environment.
■What you will work on
・Implementation and optimization for autonomous driving SoCs
・Acceleration of the overall autonomous driving model
  - Automation of the ML pipeline from model training through deployment (MLflow, Airflow, etc.)
  - Introduction of CI/CD and improvement of engineering productivity
・Conversion of PyTorch-trained models to ONNX format and operator implementation
・Bottleneck analysis and optimization of existing models
・Selection of appropriate model architectures and training methods for deployment on in-vehicle systems
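The model quantization mentioned in several of the roles above can be illustrated at its simplest: mapping float weights to int8 with a shared scale. Production toolchains (ONNX Runtime, TensorRT, etc.) do far more (calibration, per-channel scales, fused kernels); this pure-Python sketch only shows the core arithmetic, and every name in it is illustrative.

```python
# Hedged sketch of symmetric int8 quantization: floats are mapped to
# integers in [-128, 127] with one shared scale, then mapped back.
# This demonstrates the core idea only, not any real toolchain.

def quantize_int8(values):
    """Map floats to int8 with a symmetric scale; return (ints, scale)."""
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / 127.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the quantized integers."""
    return [v * scale for v in q]

weights = [0.4, -1.0, 0.25, 0.0]
q, scale = quantize_int8(weights)
print(q)  # [51, -127, 32, 0]
```

The gap between `weights` and `dequantize(q, scale)` is the quantization error that bottleneck analysis and per-layer tuning aim to keep within accuracy budgets on in-vehicle hardware.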