Position: Founding Senior Full-Stack AI/ML Engineer (Full-Time, On-site/Hybrid) 

Location: Buffalo-Niagara region, NY (on-site; hybrid flexibility as needed) 

Compensation: Competitive salary (based on experience and local market) + equity options for ownership-minded candidates.  


Key Responsibilities

  1. End-to-End ML Development: Design, implement, train/fine-tune, and deploy AI/ML models to solve core product problems, especially in computer vision (CV), reinforcement learning (RL), and multimodal LLMs/VLMs/VLAs. You will prototype algorithms, then train and optimize models on real and synthetic data (e.g., fine-tuning vision or vision-language models for our specific use case). This includes applying the latest research where applicable and evaluating model performance. 

  2. Data Pipeline & Model Lifecycle: Build and maintain the full ML pipeline: from data collection and preprocessing, through model training and validation, to deployment and monitoring in production. In practice, this means you’ll handle tasks like setting up data engineering workflows, developing training scripts, evaluating model performance, and optimizing inference services on cloud or edge devices. Expect to work across every facet of the ML lifecycle, ensuring our models move smoothly from experimentation to live usage. 

  3. Model Deployment & Full-Stack Integration: As a “full-stack” ML engineer, deploy trained models into a production environment in collaboration with the team. This includes developing model-serving APIs or services (e.g., a REST API or edge application that exposes model inference) and integrating the ML components with the rest of the software stack. In our case, that might mean packaging models (e.g., in Docker containers) and setting up cloud functions or microservices for inference. If models need to run on the edge (on-device), you’ll employ techniques like model quantization or use frameworks such as Apple Metal/MLX, ONNX, TensorRT, or TFLite to deploy on resource-constrained devices. 

  4. Infrastructure & DevOps for ML: Take ownership of the ML infrastructure – for example, manage small cloud footprints for training and testing (AWS, GCP, or other platforms). You’ll help set up and utilize CI/CD pipelines to automate model training/testing and deployment, ensure reproducibility of experiments, and monitor model performance in deployment. Essentially, you’ll wear a bit of a DevOps hat for our ML pipeline, making sure we can train, release, and update models smoothly. 

  5. Fast-Paced Collaboration & Wearing Many Hats: Work closely with the founder(s) and any future team members to iterate quickly on the product. In an early startup, you will wear multiple hats: one day you might be tuning hyperparameters; another day, refining a backend API or analyzing user feedback. Flexibility and initiative are key; you should be comfortable in an environment where priorities can shift as we learn. Importantly, you’ll contribute to technical discussions and decision-making, essentially acting as the in-house expert on all things AI/ML. We expect you to take ownership of projects and see them through from concept to production deployment. 

  6. Staying Cutting-Edge: Given our product domain, you will be expected to stay up-to-date with new research and techniques in AI/ML (e.g., new model architectures, tools, or frameworks that could give us an edge). Part of your role will be to bring innovative ideas or improvements into our R&D process. You’ll have freedom to propose and test out novel solutions that could improve accuracy, efficiency, or user experience. 


Qualifications & Skills 

  1. Legal Authorization to Work: Must be legally authorized to work in the U.S. without any restrictions at the time of hire. We are unable to offer visa sponsorship (e.g., H-1B, F-1 OPT extension, etc.) for this position. 

  2. Education & Experience: Bachelor’s, Master’s, or doctoral degree (MS or Ph.D. preferred) in AI, Computer Science/Engineering, Data Science/Engineering, or a related field (or equivalent hands-on experience). 5+ years of hands-on industry experience in machine learning, data science, or software engineering roles that involved developing and deploying AI/ML models. Proven experience taking ML projects from initial idea to deployment is a must. Startup experience is a big plus (we need someone who can operate with limited resources and ambiguous requirements). 

  3. Machine Learning & Deep Learning Expertise: Strong knowledge of ML algorithms and deep learning techniques. Experience training and fine-tuning models in areas like LLMs/VLMs/VLAs, computer vision, reinforcement learning, or multimodal models, with a deep understanding of the underlying methods. You should be comfortable reading research papers or OSS model release notes and implementing improvements. Experience with cutting-edge AI models (e.g., vision-language models, large language models, or segmentation/detection models) is ideal. Experience with distributed training and fine-tuning on cloud and HPC setups is preferred. 

  4. Programming & Frameworks: Proficiency in Python, C/C++, and common ML frameworks/libraries (such as PyTorch, TensorFlow, NumPy, pandas, OpenCV, etc.). Solid software engineering practices are required: writing clean, efficient code, using version control (Git), and structuring projects for collaboration. Data engineering skills (SQL databases, data pipelines) are essential, as is understanding how to design systems that are efficient and maintainable. 

  5. Full-Stack Engineering Ability: While ML is the focus, this role is “full-stack” in the sense that you should be able to connect the ML components to the product. Backend development skills are essential, e.g., experience building APIs or microservices (Python backends like Flask/FastAPI, or Node.js) to serve model results. Comfort with databases and basic queries (for storing results or training data) is helpful. Some front-end or embedded/edge experience is a plus (e.g., if we develop a simple web demo or deploy to a mobile app, you can assist with integration). Overall, you should be able to develop end-to-end solutions around the ML model, not just the model in isolation. 

  6. Cloud & MLOps/DevOps: Experience deploying applications or services to production on cloud platforms (AWS, GCP, or Azure) and edge devices. For example, you know how to spin up EC2 or GCE instances, use S3 or Cloud Storage, and possibly utilize container services (Docker, Kubernetes). You understand concepts such as containerization, serverless functions, and CI/CD pipelines for automated deployments. Specific experience with GPU cloud instances, model serving frameworks, or infrastructure-as-code tools will be valuable. If you have used ML pipeline tools or experiment tracking (TensorBoard, MLflow, etc.), that’s beneficial. We value an engineer who can not only build a model, but also own the operational side (monitoring performance, scaling deployments, etc.). We’ll be applying for and purchasing cloud credits and sourcing additional cloud/HPC/GPU resources through universities and other partnerships; you’ll help make the best use of them, so an eye for cost-effective cloud usage is appreciated.

  7. Edge Computing Familiarity: (Nice to have) Experience deploying ML models to edge devices or constrained environments is a strong plus — for instance, experience with ONNX Runtime, NVIDIA Jetson, Intel OpenVINO, or TensorRT, or converting models to run in real time on mobile/embedded devices. Since part of our vision involves edge computing and edge intelligence, any prior work optimizing models for speed/memory (pruning, quantization) or working with streaming data from cameras/sensors will set you apart. In addition, familiarity with open-source autopilot flight stacks such as PX4/ArduPilot is a plus. 

  8. Problem-Solving & Autonomy: Ability to independently drive projects and research solutions. As our first local engineering hire, you’ll often encounter open-ended problems. You should be resourceful and comfortable making progress with minimal guidance, while also knowing when to seek feedback. A product-oriented mindset is essential: you care about how the tech will be used and can balance ideal technical solutions with practical timelines (e.g., knowing when a quick heuristic can serve as a placeholder until a model is perfected).

  9. Collaboration & Communication: Excellent communication skills and a team-oriented attitude. You’ll be working closely with the founding team (and any advisors or part-timers) to define requirements and iterate quickly. The ability to explain complex ML concepts in simple terms to non-experts is valued, as is the ability to document your work clearly. We foster a collaborative culture. We look for someone who is confident, yet humble, open to ideas, and enthusiastic about building something innovative as a team.


Work Culture & Benefits 

  1. Startup Environment: This is an on-site, high-collaboration role at our office in the Buffalo, NY region. In the early MVP stage, we believe working in-person together leads to the fastest iterations and strongest team culture. (We do offer flexibility for occasional remote work, e.g., due to personal needs or bad weather, but candidates should be prepared for a primarily in-office experience.) Expect an energizing incubator atmosphere, close collaboration, and the ability to influence all aspects of the product. We work hard, but we also have fun celebrating milestones and learning together. 

  2. Growth & Impact: You’ll be joining as a founding engineering team member, which means substantial ownership of your work and the opportunity to grow into a leadership position as the company scales. Your contributions will directly shape our product and can make a significant impact in our target industry. If you’ve ever wanted to experience taking an AI product from zero to one, this is your chance. Every day brings new challenges, and you’ll never be bored! 

  3. Compensation & Equity: We offer a competitive salary for our stage and locale; we don’t want compensation to be a barrier for the right candidate. Additionally, we provide a meaningful equity stake in the company; as an early team member, you’ll share in the upside if we succeed. We also provide standard benefits (health insurance, PTO, etc.; details to be discussed) appropriate for a full-time role. 

  4. Equipment & Resources: We’ll equip you with top-notch hardware (a high-performance laptop or workstation of your choice) and any software or tools you need. Plus, you’ll have access to our cloud resources/credits for heavy compute needs. Our office space is in a professional, modern incubator with all amenities (secure 24/7 access, free parking, kitchen, conference rooms, coffee, and more), with the vibrancy of the University at Buffalo North Campus nearby. 







Position: Founding Senior Full-Stack AI/ML Engineer (Full-Time, On-site/Hybrid) 

Location: Buffalo-Niagara region, NY (on-site; hybrid flexibility as needed) 

Compensation: Competitive salary (based on experience and local market) + equity options for ownership-minded candidates.  


Key Responsibilities

  1. End-to-End ML Development: Design, implement, train/fine-tune, and deploy AI/ML models to solve core product problems, especially in the realm of computer vision (CV), Reinforcement Learning (RL), and multimodal LLM/VLM/VLA. You will prototype algorithms, then train and optimize models on real and synthetic data (e.g., fine-tuning vision or vision-language models for our specific use case). This includes trying the latest research when applicable and evaluating model performance. 

  2. Data Pipeline & Model Lifecycle: Build and maintain the full ML pipeline: from data collection and preprocessing, through model training and validation, to deployment and monitoring in production. In practice, this means you’ll handle tasks like setting up data engineering workflows, developing training scripts, evaluating model performance, and optimizing inference services on cloud or edge devices. Expect to work across every facet of the ML lifecycle, ensuring our models move smoothly from experimentation to live usage. 

  3. Model Deployment & Full-Stack integration: As a “full-stack” ML engineer, deploy trained models into a production environment in collaboration with the team. This includes developing models serving APIs or services (e.g., a REST API or edge application to expose model inference), and handling the integration of the ML components with the rest of the software stack. In our case, that might mean packaging models (e.g., in Docker containers) and setting up cloud functions or microservices for inference. If models need to run on the edge (on-device), you’ll employ techniques like model quantization or use frameworks like Apple Metal/MLX, ONNX/TensorRT/TFLite to deploy on resource-constrained devices. 

  4. Infrastructure & DevOps for ML: Take ownership of the ML infrastructure – for example, manage small cloud footprints for training and testing (AWS, GCP, or other platforms). You’ll help set up and utilize CI/CD pipelines to automate model training/testing and deployment, ensure reproducibility of experiments, and monitor model performance in deployment. Essentially, you’ll wear a bit of a DevOps hat for our ML pipeline, making sure we can train, release, and update models smoothly. 

  5. Fast-Paced Collaboration & Wear Many Hats: Work closely with the founder(s) and any future team members to iterate quickly on the product. In an early startup, you will wear multiple hats, one day you might be tuning hyperparameters, another day refining a backend API or analyzing user feedback. Flexibility and initiative are key; you should be comfortable in an environment where priorities can shift as we learn. Importantly, you’ll contribute to technical discussions and decision-making, essentially acting as the in-house expert on all things AI/ML. We expect you to take ownership of projects and see them through from concept to production deployment. 

  6. Staying Cutting-Edge: Given our product domain, you will be expected to stay up-to-date with new research and techniques in AI/ML (e.g., new model architectures, tools, or frameworks that could give us an edge). Part of your role will be to bring innovative ideas or improvements into our R&D process. You’ll have freedom to propose and test out novel solutions that could improve accuracy, efficiency, or user experience. 


Qualifications & Skills 

  1. Legal Authorization to Work: Must be legally authorized to work in the U.S. without any restrictions at the time of hire. We are unable to offer visa sponsorship (e.g., H-1B, F-1 OPT extension, etc.) for this position. 

  2. Education & Experience: Bachelor’s or Master’s, Doctorate degree (MS and Ph.D. preferred) in AI, Computer Science/Engineering, Data Science/Engineering, or a related field (or equivalent hands-on experience). 5+ years of industry hands-on experience in machine learning, data science, or software engineering roles that involved developing and deploying AI/ML models. Proven experience taking ML projects from initial idea to deployment is a must. Startup experience is a big plus (we need someone who can operate with limited resources and ambiguous requirements). 

  3. Machine Learning & Deep Learning Expertise: Strong knowledge of ML algorithms and deep learning techniques. Experience training and fine-tuning models in areas like LLM/VLM/VLA, computer vision, reinforcement learning, or multimodal models with a deep understanding of the relevant knowledge. You should be comfortable reading research papers or OSS model release notes and implementing improvements. Experience with cutting-edge AI models (e.g., vision-language models, large language models, or segmentation/detection models) is ideal. Experience in distributed training and fine-tuning on cloud and HPC setups is preferred. 

  4. Programming & Frameworks: Proficiency in Python, C/C++, and common ML frameworks/libraries (such as PyTorch, TensorFlow, Numpy, Pandas, OpenCV, etc.). Solid software engineering practices, writing clean, efficient code, using version control (Git), and structuring projects for collaboration are required. Data engineering skills (SQL databases, data pipelines) are essential, as is understanding how to design systems that are efficient and maintainable. 

  5. Full-Stack Engineering Ability: While ML is the focus, this role is “full-stack” in the sense that you should be able to handle connecting the ML components with a product. Backend development skills are essential, e.g., experience building APIs or microservices (Python backends like Flask/FastAPI or Node.js) to serve model results. Comfort with databases and basic queries (for storing results or training data) is helpful. Some front-end or embedded/edge experience is a plus (e.g., if we develop a simple web demo, or deploy to a mobile app, you can assist in integration). Overall, you should be able to develop end-to-end solutions around the ML model, not just the model in isolation. 

  6. Cloud & MLOps/DevOps: Experience deploying applications or services to production on cloud platforms (AWS, GCP, or Azure) and edge devices. For example, you know how to spin up EC2 or GCE instances, use S3 or Cloud Storage, and possibly utilize container services (Docker, Kubernetes). You understand concepts such as containerization, serverless functions, and CI/CD pipelines for automated deployments. Specific experience with GPU cloud instances, model serving frameworks, or infrastructure-as-code tools will be valuable. If you have used ML pipeline tools or experiment tracking (TensorBoard, MLflow, etc.), that’s beneficial. We value an engineer who can not only build a model, but also own the operational side (monitoring performance, scaling deployments, etc.). We’ll be applying for and purchasing cloud credits, and sourcing other cloud/HPC/GPU resources from universities and other partnerships, and you’ll help make the best use of them, so an eye for cost-effective cloud usage is appreciated.

  7. Edge Computing Familiarity: (Nice to have) Knowledge and experience of deploying ML models to edge devices or constrained environments are strong pluses. For instance, experience with ONNX Runtime, NVIDIA Jetson, Intel OpenVINO, TensorRT, or converting models to run in real-time on mobile/embedded devices. Since part of our vision involves edge computing and edge intelligence, any prior work optimizing models for speed/memory (pruning, quantization) or working with streaming data from cameras/sensors will set you apart. In addition, familiarity with the open-sourced autopilot flight stacks such as PX4/ArduPilot is a plus. 

  8. Problem-Solving and Autonomy: Ability to independently drive projects and research solutions. As our first local engineer hire, you’ll often encounter open-ended problems. You should be resourceful and comfortable in making progress with minimal guidance, while also knowing when to seek feedback. A product-oriented mindset is essential: you care about how the tech will be used and can balance ideal technical solutions with practical timelines (e.g., knowing when a quick heuristic might serve as a placeholder until a model is perfected).

  9. Collaboration & Communication: Excellent communication skills and a team-oriented attitude. You’ll be working closely with the founding team (and any advisors or part-timers) to define requirements and iterate quickly. The ability to explain complex ML concepts in simple terms to non-experts is valued, as is the ability to document your work clearly. We foster a collaborative culture. We look for someone who is confident, yet humble, open to ideas, and enthusiastic about building something innovative as a team.


Work Culture & Benefits 

  1. Startup Environment: This is an on-site, high-collaboration role at our Buffalo, NY region office. In the early MVP stage, we believe working in-person together leads to the fastest iterations and strongest team culture. (We do offer flexibility for occasional remote work, e.g., due to personal needs or bad weather, but candidates should be prepared for a primarily in-office experience.) Expect an energizing incubator atmosphere, close collaboration, and the ability to influence all aspects of the product. We work hard, but we also have fun celebrating milestones and learning together. 

  2. Growth & Impact: You’ll be joining as a founding engineering team member, which means substantial ownership of your work and the opportunity to grow into a leadership position as the company scales. Your contributions will directly shape our product and can make a significant impact in our target industry. If you’ve ever wanted to experience taking an AI product from zero to one, this is your chance. Every day brings new challenges, and you’ll never be bored! 

  3. Compensation & Equity: We offer a competitive salary for our stage and locale, we don’t want compensation to be a barrier for the right candidate. Additionally, we provide a meaningful equity stake in the company; as an early team member, you’ll share the upside if we succeed. We also provide standard benefits (health insurance, PTO, etc., details to be discussed later) appropriate for a full-time role. 

  4. Equipment & Resources: We’ll equip you with top-notch hardware (a high-performance laptop or workstation of your choice) and any software or tools you need. Plus, you’ll have access to our cloud resources/credits for heavy compute needs. Campus Amenities & Environment, our office space is in a professional, modern incubator with all amenities (secure 24/7 access, free parking, kitchen, conference rooms, coffee, and more), all with the vibrancy of the University at Buffalo north campus nearby. 



Position: Founding Senior Full-Stack AI/ML Engineer (Full-Time, On-site/Hybrid) 

Location: Buffalo-Niagara region, NY (on-site; hybrid flexibility as needed) 

Compensation: Competitive salary (based on experience and local market) + equity options for ownership-minded candidates.  


Key Responsibilities

  1. End-to-End ML Development: Design, implement, train/fine-tune, and deploy AI/ML models to solve core product problems, especially in the realm of computer vision (CV), Reinforcement Learning (RL), and multimodal LLM/VLM/VLA. You will prototype algorithms, then train and optimize models on real and synthetic data (e.g., fine-tuning vision or vision-language models for our specific use case). This includes trying the latest research when applicable and evaluating model performance. 

  2. Data Pipeline & Model Lifecycle: Build and maintain the full ML pipeline: from data collection and preprocessing, through model training and validation, to deployment and monitoring in production. In practice, this means you’ll handle tasks like setting up data engineering workflows, developing training scripts, evaluating model performance, and optimizing inference services on cloud or edge devices. Expect to work across every facet of the ML lifecycle, ensuring our models move smoothly from experimentation to live usage. 

  3. Model Deployment & Full-Stack integration: As a “full-stack” ML engineer, deploy trained models into a production environment in collaboration with the team. This includes developing models serving APIs or services (e.g., a REST API or edge application to expose model inference), and handling the integration of the ML components with the rest of the software stack. In our case, that might mean packaging models (e.g., in Docker containers) and setting up cloud functions or microservices for inference. If models need to run on the edge (on-device), you’ll employ techniques like model quantization or use frameworks like Apple Metal/MLX, ONNX/TensorRT/TFLite to deploy on resource-constrained devices. 

  4. Infrastructure & DevOps for ML: Take ownership of the ML infrastructure – for example, manage small cloud footprints for training and testing (AWS, GCP, or other platforms). You’ll help set up and utilize CI/CD pipelines to automate model training/testing and deployment, ensure reproducibility of experiments, and monitor model performance in deployment. Essentially, you’ll wear a bit of a DevOps hat for our ML pipeline, making sure we can train, release, and update models smoothly. 

  5. Fast-Paced Collaboration & Wear Many Hats: Work closely with the founder(s) and any future team members to iterate quickly on the product. In an early startup, you will wear multiple hats, one day you might be tuning hyperparameters, another day refining a backend API or analyzing user feedback. Flexibility and initiative are key; you should be comfortable in an environment where priorities can shift as we learn. Importantly, you’ll contribute to technical discussions and decision-making, essentially acting as the in-house expert on all things AI/ML. We expect you to take ownership of projects and see them through from concept to production deployment. 

  6. Staying Cutting-Edge: Given our product domain, you will be expected to stay up-to-date with new research and techniques in AI/ML (e.g., new model architectures, tools, or frameworks that could give us an edge). Part of your role will be to bring innovative ideas or improvements into our R&D process. You’ll have freedom to propose and test out novel solutions that could improve accuracy, efficiency, or user experience. 


Qualifications & Skills 

  1. Legal Authorization to Work: Must be legally authorized to work in the U.S. without any restrictions at the time of hire. We are unable to offer visa sponsorship (e.g., H-1B, F-1 OPT extension, etc.) for this position. 

  2. Education & Experience: Bachelor’s or Master’s, Doctorate degree (MS and Ph.D. preferred) in AI, Computer Science/Engineering, Data Science/Engineering, or a related field (or equivalent hands-on experience). 5+ years of industry hands-on experience in machine learning, data science, or software engineering roles that involved developing and deploying AI/ML models. Proven experience taking ML projects from initial idea to deployment is a must. Startup experience is a big plus (we need someone who can operate with limited resources and ambiguous requirements). 

  3. Machine Learning & Deep Learning Expertise: Strong knowledge of ML algorithms and deep learning techniques. Experience training and fine-tuning models in areas like LLM/VLM/VLA, computer vision, reinforcement learning, or multimodal models with a deep understanding of the relevant knowledge. You should be comfortable reading research papers or OSS model release notes and implementing improvements. Experience with cutting-edge AI models (e.g., vision-language models, large language models, or segmentation/detection models) is ideal. Experience in distributed training and fine-tuning on cloud and HPC setups is preferred. 

  4. Programming & Frameworks: Proficiency in Python, C/C++, and common ML frameworks/libraries (such as PyTorch, TensorFlow, Numpy, Pandas, OpenCV, etc.). Solid software engineering practices, writing clean, efficient code, using version control (Git), and structuring projects for collaboration are required. Data engineering skills (SQL databases, data pipelines) are essential, as is understanding how to design systems that are efficient and maintainable. 

  5. Full-Stack Engineering Ability: While ML is the focus, this role is “full-stack” in the sense that you should be able to handle connecting the ML components with a product. Backend development skills are essential, e.g., experience building APIs or microservices (Python backends like Flask/FastAPI or Node.js) to serve model results. Comfort with databases and basic queries (for storing results or training data) is helpful. Some front-end or embedded/edge experience is a plus (e.g., if we develop a simple web demo, or deploy to a mobile app, you can assist in integration). Overall, you should be able to develop end-to-end solutions around the ML model, not just the model in isolation. 

  6. Cloud & MLOps/DevOps: Experience deploying applications or services to production on cloud platforms (AWS, GCP, or Azure) and edge devices. For example, you know how to spin up EC2 or GCE instances, use S3 or Cloud Storage, and possibly utilize container services (Docker, Kubernetes). You understand concepts such as containerization, serverless functions, and CI/CD pipelines for automated deployments. Specific experience with GPU cloud instances, model serving frameworks, or infrastructure-as-code tools will be valuable. If you have used ML pipeline tools or experiment tracking (TensorBoard, MLflow, etc.), that’s beneficial. We value an engineer who can not only build a model, but also own the operational side (monitoring performance, scaling deployments, etc.). We’ll be applying for and purchasing cloud credits, and sourcing other cloud/HPC/GPU resources from universities and other partnerships, and you’ll help make the best use of them, so an eye for cost-effective cloud usage is appreciated.

  7. Edge Computing Familiarity: (Nice to have) Knowledge and experience of deploying ML models to edge devices or constrained environments are strong pluses. For instance, experience with ONNX Runtime, NVIDIA Jetson, Intel OpenVINO, TensorRT, or converting models to run in real-time on mobile/embedded devices. Since part of our vision involves edge computing and edge intelligence, any prior work optimizing models for speed/memory (pruning, quantization) or working with streaming data from cameras/sensors will set you apart. In addition, familiarity with the open-sourced autopilot flight stacks such as PX4/ArduPilot is a plus. 

  8. Problem-Solving and Autonomy: Ability to independently drive projects and research solutions. As our first local engineer hire, you’ll often encounter open-ended problems. You should be resourceful and comfortable in making progress with minimal guidance, while also knowing when to seek feedback. A product-oriented mindset is essential: you care about how the tech will be used and can balance ideal technical solutions with practical timelines (e.g., knowing when a quick heuristic might serve as a placeholder until a model is perfected).

  9. Collaboration & Communication: Excellent communication skills and a team-oriented attitude. You’ll be working closely with the founding team (and any advisors or part-timers) to define requirements and iterate quickly. The ability to explain complex ML concepts in simple terms to non-experts is valued, as is the ability to document your work clearly. We foster a collaborative culture. We look for someone who is confident, yet humble, open to ideas, and enthusiastic about building something innovative as a team.


Work Culture & Benefits 

  1. Startup Environment: This is an on-site, high-collaboration role at our Buffalo, NY region office. In the early MVP stage, we believe working in-person together leads to the fastest iterations and strongest team culture. (We do offer flexibility for occasional remote work, e.g., due to personal needs or bad weather, but candidates should be prepared for a primarily in-office experience.) Expect an energizing incubator atmosphere, close collaboration, and the ability to influence all aspects of the product. We work hard, but we also have fun celebrating milestones and learning together. 

  2. Growth & Impact: You’ll be joining as a founding engineering team member, which means substantial ownership of your work and the opportunity to grow into a leadership position as the company scales. Your contributions will directly shape our product and can make a significant impact in our target industry. If you’ve ever wanted to experience taking an AI product from zero to one, this is your chance. Every day brings new challenges, and you’ll never be bored! 

  3. Compensation & Equity: We offer a competitive salary for our stage and locale; we don’t want compensation to be a barrier for the right candidate. Additionally, we provide a meaningful equity stake in the company; as an early team member, you’ll share the upside if we succeed. We also provide standard benefits (health insurance, PTO, etc.; details to be discussed) appropriate for a full-time role. 

  4. Equipment & Resources: We’ll equip you with top-notch hardware (a high-performance laptop or workstation of your choice) and any software or tools you need, plus access to our cloud resources/credits for heavy compute needs. Our office is in a professional, modern incubator with full amenities (secure 24/7 access, free parking, a kitchen, conference rooms, coffee, and more), with the vibrancy of the University at Buffalo North Campus nearby. 








© 2025 Kuiper Lab. All rights reserved.

Boston, MA | Buffalo, NY

info@kuiperlab.ai
