Data Dynamics - Artificial Intelligence/Machine Learning Engineer/Architect - Python



Job Description

Description:

Role: AI/ML Developer/Architect
Location: Pune (Onsite)
Experience: 7+ years

Job Summary:

We are looking for a highly skilled AI/ML Developer/Architect with 7+ years of experience in designing, developing, and deploying Machine Learning (ML) and Artificial Intelligence (AI) solutions. The ideal candidate should be proficient in Python, TensorFlow, PyTorch, and cloud platforms (AWS, Azure, GCP), and should have hands-on experience in building end-to-end AI/ML models, MLOps pipelines, and scalable AI architectures.

Key Responsibilities:

AI/ML Development:

- Design, develop, and optimize ML/DL models for real-world applications across multiple industries and use cases.
- Collaborate with data scientists, engineers, and stakeholders to define model requirements and success metrics.
- Implement, test, and deploy AI models using frameworks like TensorFlow, PyTorch, or Scikit-learn to solve complex business problems.
- Develop reusable model components to accelerate development and experimentation cycles.
- Fine-tune models for accuracy, performance, and efficiency through hyperparameter optimization and architecture experimentation.
- Perform regular model evaluations to assess bias, drift, and robustness to ensure fairness and reliability.

MLOps & Deployment:

- Build scalable ML pipelines and deploy models using Docker, Kubernetes, and cloud services (AWS/GCP/Azure) for both batch and real-time applications.
- Establish automated CI/CD pipelines for model versioning, testing, and deployment using MLflow, Kubeflow, or SageMaker.
- Implement model monitoring, logging, and alerting to ensure continuous model performance and health checks post-deployment.
- Optimize AI solutions for low-latency and high-availability performance under varying workloads.
- Implement infrastructure-as-code (IaC) practices to maintain and deploy AI/ML infrastructure in a repeatable manner.
- Work with cross-functional teams to ensure that data security, compliance, and privacy policies are integrated into MLOps pipelines.

AI Architecture & Design:

- Architect end-to-end AI/ML solutions, including data ingestion, preprocessing, feature engineering, training, and inference pipelines.
- Define scalable, modular, and cost-effective AI architectures that align with enterprise goals and technology stacks.
- Design solutions to support both on-premise and cloud-based AI workflows for flexibility and scalability.
- Create reusable design patterns for AI model integration with existing enterprise systems, APIs, and databases.
- Implement best practices for model governance, including compliance with regulatory standards, auditability, and explainability.
- Work with business leaders to translate strategic objectives into AI-driven initiatives and roadmaps.

Data Engineering & Processing:

- Work with large, complex datasets to optimize ETL pipelines for AI model training and inference.
- Design and build scalable data pipelines using distributed processing frameworks like Spark, Hadoop, or Dask.
- Collaborate with data engineering teams to enhance data accessibility, quality, and reliability for machine learning workflows.
- Leverage SQL/NoSQL databases and data lakes to create data schemas and structures that support efficient ML operations.
- Implement feature stores and data cataloging tools to streamline feature reuse and data discovery across teams.
- Develop and maintain data governance frameworks to ensure data security, privacy, and compliance.

Research & Innovation:

- Stay up to date with cutting-edge AI research and trends, including advances in Generative AI, NLP, and Computer Vision.
- Experiment with Large Language Models (LLMs) and generative AI models (e.g., GPT, Stable Diffusion) to develop innovative AI solutions.
- Prototype and evaluate emerging AI technologies to assess their applicability to business problems.
- Contribute to open-source AI/ML projects, research papers, and industry conferences to establish thought leadership.
- Collaborate with universities, research institutes, and external partners to foster innovation and access new AI capabilities.
- Identify opportunities to apply AI in unexplored areas to create competitive advantages for the organization.

Required Skills & Qualifications:

Programming & AI Frameworks:

- Proficiency in Python and key AI libraries such as TensorFlow, PyTorch, and Keras, with experience in both supervised and unsupervised learning models.
- Experience with computer vision libraries like OpenCV and other image/video processing frameworks.
- Deep expertise in natural language processing (NLP) techniques using Transformers, BERT, GPT, and related models.
- Strong understanding of ML algorithms, deep learning architectures (CNNs, RNNs, LSTMs), and optimization techniques (e.g., gradient descent, hyperparameter tuning).
- Proficiency in at least one secondary programming language (e.g., Java, C++, or Go) to support AI integration into legacy systems.
- Experience with tools for model evaluation, visualization, and performance monitoring, such as TensorBoard and ML visualization dashboards.

MLOps & Deployment:

- Hands-on experience with Docker for containerization and Kubernetes for orchestration of scalable AI model deployments.
- Familiarity with web frameworks like FastAPI and Flask for serving AI models and building RESTful APIs (see the brief illustrative sketches after this description).
- Experience with cloud-based ML services such as AWS SageMaker, GCP Vertex AI, or Azure ML, including managing pipelines and infrastructure automation.
- Expertise in using MLOps tools like MLflow, Kubeflow, or Argo Workflows for model tracking, lifecycle management, and version control.
- Knowledge of serverless architecture and microservices deployment strategies to optimize cloud infrastructure costs and performance.
- Ability to implement monitoring, logging, and auto-scaling for AI models in production environments.

(ref:hirist.tech)
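For context, the experiment tracking and model serving skills named above might look, in minimal form, like the two Python sketches below. These are illustrative only and assume the mlflow, scikit-learn, fastapi, and pydantic packages are installed; the dataset, hyperparameters, "model.pkl" file, and flat feature vector are hypothetical placeholders, not details taken from this posting.

    # Sketch 1: a minimal MLflow-tracked training run (illustrative only;
    # the iris dataset and hyperparameters are placeholders).
    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    with mlflow.start_run():
        params = {"n_estimators": 100, "max_depth": 4}
        model = RandomForestClassifier(**params).fit(X_train, y_train)
        accuracy = accuracy_score(y_test, model.predict(X_test))
        mlflow.log_params(params)                  # record hyperparameters for comparison across runs
        mlflow.log_metric("accuracy", accuracy)    # record the evaluation metric
        mlflow.sklearn.log_model(model, "model")   # store a versioned model artifact

    # Sketch 2: serving a pickled model behind a FastAPI endpoint
    # (illustrative only; "model.pkl" and the feature layout are placeholders).
    import pickle
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    with open("model.pkl", "rb") as f:
        model = pickle.load(f)          # hypothetical serialized model

    class Features(BaseModel):
        values: list[float]             # one sample as a flat feature vector

    @app.post("/predict")
    def predict(features: Features):
        prediction = model.predict([features.values])
        return {"prediction": prediction.tolist()}

In practice, a service like Sketch 2 would typically be packaged in a Docker image, run with an ASGI server such as uvicorn, and deployed on Kubernetes or a managed cloud service, as the responsibilities above describe.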


Required Skill Profession

Computer Occupations




