

Running LLM Agents Where Your Code Already Lives

(Mollie)
Language: English
Time: 19:00 - 19:15

Most agentic architectures push orchestration out to an external platform. You usually need to expose your domain services as APIs, wire them together, manage another piece of infrastructure, and version the contracts between them. At Mollie, we went a different direction. We built mollie-agent, an internal Java library based on Spring AI that lets product teams run tool-calling agents directly inside their services.

When the agent runs in-process, your existing @Service becomes a tool. No new APIs to build, no separate service to deploy, no HTTP contracts to version, no network hops to debug. Your transaction boundaries, security context, and observability carry over without extra wiring. Orchestration logic is just business logic, and it belongs next to your repositories and domain models, not in a separate platform.
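To make the idea concrete, here is a minimal, dependency-free sketch of an in-process tool registry. All names (`RefundService`, `ToolRegistry`) are illustrative, not mollie-agent's or Spring AI's actual API; the point is that the tool call is a plain method invocation on an existing service bean, with no API layer in between.

```java
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch: stands in for an existing Spring @Service.
class RefundService {
    String refund(String paymentId) {
        // Runs in the caller's transaction and security context.
        return "refunded:" + paymentId;
    }
}

// Hypothetical in-process registry: tool name -> service method reference.
class ToolRegistry {
    private final Map<String, Function<String, String>> tools;

    ToolRegistry(RefundService refunds) {
        // The service method itself is the tool -- no HTTP contract to version.
        this.tools = Map.of("refund", refunds::refund);
    }

    String call(String tool, String arg) {
        return tools.get(tool).apply(arg); // plain method call, no network hop
    }
}
```

In the real library the registration is driven by annotations rather than a hand-built map, but the execution path is the same: an agent's tool call resolves to a direct call on a bean in the same JVM.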

We'll ground the talk in Krawler as a Service (KaaS), a production agent that autonomously browses merchant websites and extracts structured data. Through KaaS, we'll walk through why we manage the ReAct loop ourselves instead of delegating to Spring AI, how we handle memory compression when browsing sessions accumulate entire HTML pages, and the security challenges of agents processing untrusted content: prompt injection from merchant websites, preventing the agent from acting outside its domain boundary, and script injection in page content.
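Owning the ReAct loop means the caller, not the framework, drives the reason/act/observe cycle, which is what makes it possible to cap iterations and compress memory between steps. The sketch below is a toy illustration of that control flow, not mollie-agent's or Spring AI's actual code; the `Llm` interface, the `FINAL:` convention, and the compression policy are all assumptions made for the example.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical minimal LLM client: produces the next thought from memory.
interface Llm { String next(List<String> memory); }

class ReactLoop {
    static String run(Llm llm, int maxSteps) {
        List<String> memory = new ArrayList<>();
        for (int i = 0; i < maxSteps; i++) {
            String thought = llm.next(memory);   // reason step
            if (thought.startsWith("FINAL:")) {  // model signals completion
                return thought.substring(6);
            }
            String observation = act(thought);   // act step (tool call)
            memory.add(thought);
            memory.add(observation);
            compress(memory);                    // e.g. drop stale raw HTML
        }
        return "gave up after " + maxSteps + " steps";
    }

    // Stand-in for dispatching the thought to a registered tool.
    static String act(String action) { return "observed:" + action; }

    // Toy compression policy: keep only the most recent entries.
    static void compress(List<String> memory) {
        while (memory.size() > 6) memory.remove(0);
    }
}
```

Because the loop is ordinary application code, the iteration cap, the compression policy, and any guardrails against acting outside the domain boundary can live right here instead of in framework configuration.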

We'll close with what's still hard: how we evaluate agents, and why getting consistent structured output out of an LLM is harder than it looks.