Bakı,
Azərbaycan
10/12/2025 – 17/12/2025
About the Job
- Implement features, fixes and small modules in Python for processing large time-series and event datasets.
- Contribute to internal simulation / analytics tools using Python (pandas, numpy, etc.).
- Develop and maintain simple integrations with internal and external APIs and data providers.
- Write clean, readable, well-documented code and basic tests for the parts you own.
- Collaborate closely with the founder and a junior analyst, participate in code reviews and planning, and learn from feedback.
How We Work & What We Offer:
- High ownership & growth: You will start by owning well-defined components and tasks, and gradually take more responsibility within the core platform.
- Direct impact: You work directly with the founder; there is no long hierarchy between decisions and implementation.
- Lean, focused team: Minimal bureaucracy, clear priorities and room to shape how we build things.
- Learning & growth: Strong exposure to real-world data pipelines, Python engineering and basic DevOps/infrastructure, with mentorship and guidance.
- Office & perks: Centrally located, well-equipped office (game & entertainment room, napping beds, etc.).
- Food & comfort: Lunch, drinks, sweets and fruits are all provided by the company.
- Health: Private health insurance.
Working hours:
- You’ll work five days a week, 10:00–19:00, with a lunch break from 13:00 to 14:00. Four days are office-based, and one day (Friday) you can work from home.
Salary:
- Competitive and negotiable based on experience and knowledge, including a performance-based bonus package.
Requirements
- At least 1 year of hands-on experience working on data-intensive or backend systems in Python.
- Good understanding of Python fundamentals and core data structures (lists, dicts, sets, classes, etc.).
- Some experience using pandas and numpy for working with datasets (CSV, Excel, time-series).
- Comfortable with API design, integrations, and internal/external service communication.
- Experience with relational databases (e.g. MySQL or PostgreSQL).
- Familiarity with Git/GitHub and collaborative development (branches, pull requests, code review).
- Basic familiarity with Linux-based environments and Docker, or a strong motivation to learn them on the job.
- Ability to learn fast, ask questions, and take ownership of assigned tasks.
- Technical English at a minimum: the ability to read and understand technical documentation. Strong verbal and written communication skills are a plus.
Nice to Have:
- Personal or academic projects in data / analytics / time-series (GitHub, Kaggle or similar).
- Exposure to cloud environments (AWS / GCP / Azure).
- Experience with non-relational databases.
- Background (education or online courses) in statistics, time-series analysis, or analytics systems.
Application Guidelines:
Please submit your application through the Career Page link under the "Apply for job" button.
In the Cover Letter section, make sure to include:
- A brief note describing relevant projects you have worked on (university, internships, or personal projects).
- (If available) links to your GitHub, portfolio, or any other supporting materials.
We look forward to reviewing your application!
Please be advised that only shortlisted candidates will be invited to further stages of the recruitment process.