In the machine learning community, the interplay of ML, applied science, and software development always looms large. I hope this series of Q&A posts will address some of your concerns.

🚀 It’s all about execution.

Do you find yourself brainstorming a phenomenal AI/ML concept? If so, have you figured out a roadmap to turn it into reality?
The epic tale of Steve Jobs' comeback to Apple is a staple of tech lore. 🍎 His famous interview outlines a detrimental mindset plaguing Apple at the time: believing that a stellar idea constituted 90% of the task, whilst overlooking the fine craftsmanship needed to morph it into an exceptional product. This mindset led to ill-conceived products, like the notorious Apple Newton.
The takeaway? 💡 Execution is the name of the game! It's where the brunt of the effort must be channelled. Developing AI/ML systems is no cakewalk. It's a blend of intricate science and engineering. 💻 A startling fact often thrown around is that 90% of ML models never see the light of production. Hence, I emphasize the significance of MLOps just as much as the underlying ML science. Pondering this, I've identified three common pitfalls of AI/ML implementation:
1️⃣ A science team devoid of MLOps backing is doomed to underperform. Remember, scientists aren't engineers. Taking their research to production can be a real hurdle. And while many scientists can code, productionizing AI/ML systems demands a different kind of expertise. If your scientists are swamped with implementation, their time for scientific exploration (the reason you brought them onboard) is compromised. I'd suggest a 2:1 ratio of MLOps engineers to scientists.
2️⃣ A unified MLOps team serving multiple science teams is inefficient. Here's why: scientists require constant support from MLOps. They need to understand what's feasible and how to transition their research prototypes into fully operational systems. A central MLOps team juggling multiple science teams faces continual context switches. It may not be available early enough to guide the scientists, leading to wasted effort on infeasible prototypes. Having a single leader oversee dedicated science and MLOps teams is a more sensible approach.
3️⃣ An MLOps team operating without scientists' inputs will miss the mark. Just like scientists lean on MLOps, MLOps engineers need scientists' continuous insights. It's a two-way street. To fine-tune and troubleshoot systems in the making, MLOps engineers require scientists' expertise. It's crucial to view scientists as the customer—design systems based on scientific needs and involve scientists at every step.
As AI enthusiasts, it's important to remember that an idea, no matter how brilliant, is only as good as its execution. So let's roll up our sleeves and get to work! 💪🚀
 

🗼 Let’s walk through some examples

Navigating the intricacies of machine learning (ML) science in harmony with Machine Learning Operations (MLOps) can feel like mastering the art of a complex dance. This synchrony is crucial to the creation of exceptional AI/ML systems. Here are my five guiding lights illuminating this dance floor:
  1. 🧪 Consider ML scientists as patrons of MLOps. The secret lies in defining the right boundaries. Here, the science needs take precedence as system requirements, and the onus is on MLOps to devise systems that empower scientists. This doesn't make MLOps a secondary force, but underscores the necessity of a strong, symbiotic partnership, reminiscent of a business-customer relationship.
  1. 🛠️ Prototypes: your secret weapon. The ideal bridge connecting ML science to MLOps is paved with working prototypes, constructed by the scientists. MLOps then takes the reins, converting the prototypes into production models. A comparative study of the two outputs ensures validation and helps in pinpointing any discrepancies (see the parity-check sketch after this list). Tools like Jupyter notebooks serve as trusty allies in building ML prototypes.
  1. 📍 Strike a balance in proximity. If MLOps hovers too far from the science, the resulting systems will fall short of meeting the scientists' needs. Conversely, a too-close-for-comfort proximity could lead to a gridlock of processes. It's about achieving the 'Goldilocks' distance - not too close, not too far. Whether separate teams under one leader, or within one team, a centralized MLOps team catering to multiple science teams often turns out to be a misstep.
  1. 🗣️ Encourage a two-way learning street. The exchange between ML science and MLOps should be an ongoing dialogue. Scientists rely on MLOps for insight on what can be achieved within the architectural boundaries, while MLOps must stay informed about scientific endeavors to ensure their systems align with future needs. Regular review sessions alternating between scientific and engineering aspects can facilitate this exchange.
  1. 👥 Cultivate a blame-free environment. Crafting AI/ML systems can be a demanding task, replete with scientific, architectural, and cost hurdles. Unless managed carefully, it's easy for frustration and blame to creep into the team dynamics. Open discussions about pain points can help defuse tension and find resolutions. Aim to tackle major issues head-on, while reaching workable compromises for smaller hurdles.
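To make the prototype-to-production comparison from point 2 concrete, here is a minimal parity-check sketch in Python. The function and model names, the tolerance, and the synthetic data are illustrative assumptions rather than a prescribed workflow; the idea is simply to score the production re-implementation against the scientists' prototype on the same held-out data before signing off.

```python
# Minimal parity-check sketch (assumptions: both models are scikit-learn-style
# objects exposing .predict(); the data and model names are placeholders).
import numpy as np

def check_parity(prototype_model, production_model, X_val, tol=1e-6):
    """Compare prototype vs. production predictions on held-out data."""
    proto_preds = np.asarray(prototype_model.predict(X_val))
    prod_preds = np.asarray(production_model.predict(X_val))

    # Element-wise absolute differences highlight where the two diverge.
    diffs = np.abs(proto_preds - prod_preds)
    return {
        "max_abs_diff": float(diffs.max()),
        "mean_abs_diff": float(diffs.mean()),
        "within_tolerance": bool(np.all(diffs <= tol)),
    }

if __name__ == "__main__":
    # Synthetic regression task, purely for illustration.
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

    # Here "prototype" and "production" are the same class for brevity;
    # in practice the production model is MLOps' re-implementation.
    prototype = LinearRegression().fit(X[:150], y[:150])
    production = LinearRegression().fit(X[:150], y[:150])

    print(check_parity(prototype, production, X[150:]))
```

A report like this, run on every candidate release, gives both sides a shared, objective artifact to discuss instead of trading impressions about whether the production system "matches the notebook."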
Mastering the interplay between AI/ML and MLOps is an art and a science. Experimentation with various techniques and processes is key to finding what works best for your unique circumstances. 💡 Do you have any suggestions from your own experience to share?
 
💡
Let me know what else you want to hear about this topic.
  • Navigating the Role of a Machine Learning Manager at Amazon 🧭 (3min read)
  • ML Team Leader Q&A Part 2 - ML Career Path (5min read)