About the Project
We are developing a fine-tuning automation platform for an AI team, streamlining model adaptation, data collection, and evaluation. The PoC phase focuses on fine-tuning OpenAI’s Whisper model for Arabic transcription, validating performance improvements, and ensuring on-premises deployment feasibility.
Phase: Proof of Concept
Engagement: Short-Term (1.5 months, with potential extension)
Responsibilities:
- Model Fine-Tuning;
- Data Collection & Preprocessing;
- Model Evaluation & Benchmarking;
- Deployment & Infrastructure.
Requirements:
- 4+ years of commercial experience;
- Proficiency in Python, PyTorch, and Hugging Face Transformers;
- Good understanding of LoRA/QLoRA fine-tuning techniques (hands-on experience preferred);
- Strong background in data preprocessing and augmentation (preferably for speech recognition);
- Ability to work independently within a short PoC timeframe;
- Upper-Intermediate level of English or higher.
Nice to have:
- Experience with Speech-to-Text (STT) models (preferably Whisper or similar);
- Experience with WER, CER, and BLEU score evaluations;
- Familiarity with Gradio, Neptune.ai, and on-premises ML deployments;
- Prior experience in automating fine-tuning pipelines;
- Familiarity with distributed training and optimization techniques.
Our benefits:
- Support for professional and career growth;
- Competitive salary;
- Flexible working hours;
- Regular technical training at our office.