About PhantomBuster
PhantomBuster is a web automation SaaS that allows businesses to grow faster. We enable thousands of companies to boost their growth by finding and connecting with their ideal customers seamlessly.
Founded in 2016, PhantomBuster has developed a toolbox of over 120 automation flows (Phantoms) to help businesses scale their sales and marketing processes. We allow our users to automate finding and enriching data about their potential customers, and to leverage that data to connect with them.
Why this role exists
We are building agentic AI into the core of our product and need someone who can help us move faster — not by learning as they go, but by bringing real, hands-on experience with agentic systems from day one.
You will work alongside our current ML Engineer to set standards, build the frameworks others will follow, and advise engineering teams across the company on how to implement AI agents the right way. This is a greenfield opportunity: you will shape how we do things, not inherit someone else's playbook.
About the Role and the Team
You will join our Data Department to support the development of Phantom Intelligence, the platform that powers our product's AI capabilities.
The Data Department currently consists of two teams:
- Analytics Team: three Analytics Engineers
- Machine Learning Team: one Machine Learning Engineer
Our AI stack runs on AWS Bedrock AgentCore, with agents built in Python 3.13 / LangChain / LangGraph, observed via Langfuse, and tested with an in-house evaluation framework. The data platform combines an operational PostgreSQL database, an AWS data lake, and a Snowflake data warehouse. Data and reporting are also available through self-service tools such as Amplitude, Tableau, Google Analytics, and ChartMogul.
What You’ll Do
- Define and evolve our infrastructure to enable stronger ML and AI capabilities, with a focus on LLM-based and agentic systems.
- Contribute to the development and expansion of our agentic AI framework powered by AWS Bedrock, enabling both internal tools and customer-facing features.
- Identify, source, and refine datasets for tuning models, powering retrieval pipelines, and expanding agentic workflows.
- Pre-process data using techniques such as data cleaning, feature engineering, and transformation.
- Train, evaluate, and deploy both LLM-based systems and traditional machine learning models into production.
- Monitor, debug, and continuously improve deployed models and AI tools.
- Support machine learning usage throughout the company, including selecting the right modeling approach for each use case (LLM vs. traditional ML).
- Support the integration and use of LLMs, including approaches such as fine-tuning, prompt tuning, and retrieval-augmented generation (RAG), to improve accuracy.
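The retrieval-augmented generation (RAG) approach mentioned above can be sketched in a few lines. This is a dependency-free toy illustration of the pattern, not PhantomBuster's implementation: the bag-of-words `embed` function stands in for a real embedding model and vector store.

```python
# Toy RAG sketch: retrieve the most relevant document for a query,
# then build an augmented prompt for the model.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Crude bag-of-words 'embedding' standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Phantoms automate lead generation workflows.",
    "Snowflake stores the analytics warehouse tables.",
]
context = retrieve("how do phantoms automate leads", docs)[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: how do phantoms automate leads?"
```

In practice, retrieval runs over embedding vectors in a vector store, but the shape of the pipeline stays the same: retrieve relevant context, then augment the prompt before calling the model.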
You might be a great fit if:
- You have an analytical mindset.
- You strive to understand business challenges and leverage ML and LLMs to solve them.
- You have genuine curiosity about agentic AI: you've explored it because it excites you, not just because it's trending.
- You are autonomous and rigorous. You have an ownership mindset: you define the path, challenge assumptions, and aren't afraid to break new ground.
- You can explain complex AI concepts clearly to engineers, non-technical stakeholders, and anyone in between. If you can't articulate how something works, that's a signal.
- You're brave enough to try things in a space where best practices are still being written.
- You're resourceful: you might not have all the answers, but you are ready to find them.
- You are a team player with high integrity, and you can remain flexible as we grow.
Requirements
- 5+ years of experience as an ML Engineer, AI Engineer, or Software Engineer with a strong AI focus.
- Hands-on experience building AI agents using frameworks such as LangChain, Amazon Bedrock AgentCore, or similar.
- Strong understanding of LLM-based systems: prompt engineering, agent orchestration, tool use, and multi-agent workflows.
- Familiarity with the Model Context Protocol (MCP) and experience integrating agents with external APIs or data sources.
- Experience working with Amazon Bedrock AgentCore agents or similar agent setups.
- Strong understanding of machine learning algorithms, statistical methods, and data preprocessing techniques.
- Experience with cloud platforms for model training and deployment, especially AWS.
- Proficiency in Python, including LangChain and standard data libraries (Pandas, NumPy, etc.).
- Fluency in English.
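The agent orchestration and tool-use experience listed above boils down to a loop: the model either requests a tool call or returns a final answer. Here is a minimal, framework-free sketch with a mocked model; `mock_llm` and the `TOOLS` registry are illustrative names, not part of any real API.

```python
# Minimal tool-calling agent loop: the (mocked) model either requests
# a tool call or returns a final answer.
import json

TOOLS = {
    "add": lambda args: args["a"] + args["b"],
}

def mock_llm(messages: list[dict]) -> dict:
    """Stand-in for an LLM call: asks for the 'add' tool once, then answers."""
    tool_msgs = [m for m in messages if m["role"] == "tool"]
    if not tool_msgs:
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"answer": f"The result is {tool_msgs[-1]['content']}"}

def run_agent(user_input: str) -> str:
    messages = [{"role": "user", "content": user_input}]
    for _ in range(5):  # cap iterations to avoid infinite loops
        reply = mock_llm(messages)
        if "answer" in reply:
            return reply["answer"]
        result = TOOLS[reply["tool"]](reply["args"])  # execute requested tool
        messages.append({"role": "tool", "content": json.dumps(result)})
    raise RuntimeError("agent did not converge")
```

Frameworks such as LangChain and LangGraph implement production-grade versions of this loop, adding state management, structured outputs, and tracing on top of the same basic pattern.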
Bonus points
- Direct experience with conversational chat agents/sub-agents (LangChain/LangGraph, Pydantic structured outputs, tool calling) and shared evaluation infrastructure (DeepEval, Langfuse traces, cross-agent I/O contracts).
- Background in MLOps: model monitoring, CI/CD pipelines, versioning (MLflow, Airflow).
- Contributions to open-source ML or AI projects.
- Experience in a B2B SaaS or product-led growth company.
What's in it for you?
- Fully remote working environment (France, Spain, or Portugal).
- Real ownership: you will define how agentic AI is built at PhantomBuster, not follow someone else's decisions.
- Freedom to research and adopt new technologies as the space evolves, and to make an impact at a small, self-funded, profitable tech startup by laying the foundation for machine learning and AI.
- Collaborative and open-minded culture based on rationality, humility, honesty, and long-term thinking.
- Benefits and perks are described below.
Hiring Process
- Screening with our Talent Acquisition Partner, Diane (30 min).
- Job Fit interview with our Analytics Manager, Maren, plus a Data Team member (60 min).
- Live technical exercise: a hands-on session with our technical team exploring how you approach agentic AI problems in real time (60–90 min).
- Culture fit interview with Nicolas, our CTO, and colleagues from other departments (60 min).
AI Guidelines
At PhantomBuster, we use AI tools daily to build things faster. Since the use of AI in recruitment has several implications, we want to be transparent about how we use it and how we expect you to use it during our recruitment process.
How we use AI:
- Draft and refine job descriptions and case studies
- Draft emails during the process
- Find interview timeslots
- Summarize interview notes
How we don't use AI:
- Assess your CV or profile
- Evaluate interview performance
- Conduct interviews
- Grade technical tasks or case studies
You interact with humans. Period.
We invite you to use AI throughout the recruitment process. However, we want to meet YOU, not machine-generated responses. Your unique perspective matters so much more than perfect AI answers.
Feel free to use AI to:
- Research our company, team, or product
- Refine your CV, portfolio, or LinkedIn profile
- Prepare for interviews and brainstorm potential questions
- Polish your case study or presentation
- Draft emails to us
Let us know throughout the process how you used AI; we're curious to learn.
Don't use AI to:
- Search for answers during interviews (unless we ask)
- Create documents (CV, portfolio, presentation) from scratch without your input
- Build case studies or technical tests without your personal touch
If you have any questions → connect with us at [email protected]
