Gloster–Minero IT Hungary Ltd. provides innovative consulting and software development services, supporting some of Europe’s leading automotive manufacturers while also playing an active role in major healthtech projects. Our solutions enhance the efficiency of manufacturing, logistics, and healthcare processes, and support our clients’ long-term digital growth. We are committed to building reliable, scalable, and high-value IT systems. All of this is delivered within an inspiring environment that fosters professional development and continuous learning.
Join us as an LLM Engineer to work at the forefront of large language model (LLM) innovation. We seek someone who combines strong AI theory and hands-on ML skills to fine-tune, deploy, and optimize LLMs for real-world applications. This role is ideal for those who want to build and own self-hosted, production-grade LLM and Retrieval-Augmented Generation (RAG) solutions—going beyond prompt engineering and plug-and-play APIs.
RESPONSIBILITIES
- Fine-tune pre-trained LLMs (e.g., Llama, Mistral, T5, GPT) using modern ML techniques for domain-specific use cases.
- Deploy and optimize models locally with frameworks like Ollama or vLLM, balancing performance and efficiency for in-house needs.
- Build and maintain self-hosted RAG pipelines using vector databases (FAISS, Pinecone, or similar) to add contextual knowledge and improve model outputs.
- Select and tune models (pruning, quantization, parameter adjustments) based on project demands and hardware constraints.
- Lead infrastructure setup for secure inference, data privacy, and operational ML best practices (Docker, minimal cloud when necessary).
- Document and present your work clearly while collaborating with other technical and non-technical team members.
- Bring a portfolio of homegrown, practical AI/ML projects that demonstrate more than tutorial-following; genuine initiative is highly valued.
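To illustrate the kind of hands-on work these responsibilities involve, here is a minimal sketch of the retrieval step at the heart of a RAG pipeline. It uses toy vectors and NumPy cosine similarity in place of a real embedding model and a vector index such as FAISS; the documents and vectors are purely illustrative.

```python
import numpy as np

# Toy document store. In a production RAG pipeline, the vectors would come
# from an embedding model and be indexed in FAISS or a similar vector store.
documents = [
    "Llama and Mistral are open-weight LLM families.",
    "FAISS provides efficient vector similarity search.",
    "Docker simplifies reproducible model deployment.",
]
doc_vectors = np.array([
    [0.9, 0.1, 0.0],
    [0.1, 0.9, 0.1],
    [0.0, 0.1, 0.9],
])

def retrieve(query_vector, k=1):
    """Return the top-k documents ranked by cosine similarity to the query."""
    q = query_vector / np.linalg.norm(query_vector)
    d = doc_vectors / np.linalg.norm(doc_vectors, axis=1, keepdims=True)
    scores = d @ q                       # cosine similarity per document
    top = np.argsort(scores)[::-1][:k]   # indices of the k best matches
    return [documents[i] for i in top]

# A query vector close to the second document's embedding retrieves it first.
print(retrieve(np.array([0.2, 0.8, 0.1])))
```

The retrieved passages would then be prepended to the LLM prompt as context; swapping the toy arrays for a real embedding model and a FAISS index is what "building a self-hosted RAG pipeline" means in practice.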
REQUIREMENTS
- 2–4 years of direct experience with ML and recent exposure to LLMs (this is not a senior position).
- Solid theoretical grasp of AI/ML: neural networks, transformers, supervised/unsupervised learning, core math/statistics.
- Strong Python skills; familiarity with PyTorch or TensorFlow and Hugging Face Transformers.
- Demonstrated hands-on work: local LLM hosting, RAG builds, fine-tuning, and infrastructure—not just prompt and API integration.
- Evidence of self-motivated ML tinkering, such as public GitHub repos, blogs, or AI side projects.
- Clear communicator, eager to learn and to help others understand both the fundamentals and the technology.
WHAT WE ARE NOT LOOKING FOR
- Pure researchers, power users of SaaS AI tools, or candidates with only prompt engineering/API integration experience.
- Applicants without practical model tuning, hosting, or infrastructure experience.
WHAT WE OFFER
- Work with true self-hosted AI infrastructure.
- Freedom to experiment and grow within a collaborative environment.
- The chance to “own” LLM systems from model to deployment—making a measurable impact.
Apply with your CV and a brief portfolio (preferably GitHub), highlighting your most relevant AI/ML/LLM projects.