There are two levels to ML/AI: being a researcher and being an engineer. The researcher actually creates new models, architectures, etc. You're going to need to be talented at math, and you'll likely need to pursue a PhD to have enough time to absorb some subset of the material well. (A master's was good, but not enough time for me personally.)
Then there is engineering, which is leveraging the creations of the very smart PhDs. At least in my experience, the shallow level is basically fine-tuning models to your use case, which does require an understanding of things like loss functions and train/validation/test sets, but it's not too complicated.
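To make those basics concrete, here's a toy sketch in plain Python of a train/validation/test split and a simple loss function (the 70/15/15 ratio and the dummy data are just illustrative assumptions, not a recommendation):

```python
import random

# Pretend dataset: 1000 examples. A 70/15/15 split is a common
# (but arbitrary) choice for train/validation/test.
data = list(range(1000))
random.seed(0)
random.shuffle(data)

n = len(data)
train = data[: int(0.70 * n)]                 # used to fit the model
val = data[int(0.70 * n) : int(0.85 * n)]     # used to tune hyperparameters
test = data[int(0.85 * n) :]                  # touched once, at the end

# A loss function just scores predictions against targets;
# mean squared error is about the simplest example.
def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)
```

The point is only the shape of the workflow: fit on `train`, pick settings using `val`, and report `test` performance once so it stays an honest estimate.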
Everyone who asks me how to learn machine learning, I advise to read Hands-On Machine Learning by Aurélien Géron cover to cover. When I first started my master's I did this, and it helped immensely because it was easy to understand, broad, and usually approached things from an application perspective.
From there, I would suggest learning PyTorch (starting with Keras is OK too, but don't stay there too long, and avoid TensorFlow), as it's much easier to develop with. I always learn best with a personal project, so maybe see if there is a real-life "problem" you'd be interested in solving, like classifying different pets from each other or something like that.
It'll take a while to build up your skills, so going to school is of course an option, but with dedication I think you can also accomplish this solely with side projects and learning on your own. Best of luck!
1) Did your master's cover non-deep-learning (classical) vision in sufficient detail? There is a ton of math in there. Going from a shallow user of OpenCV to a deep one seems like a big jump. I'm not sure a master's focused solely on classical vision would get someone there (let alone one also covering other things like ML, DL, etc.).
2) Did you end up training large models from scratch or is it all just fine-tuning? I am trying to do the former and I realize getting things to scale for from-scratch training is a whole other topic. I suspect getting things ready for inference would be similar.
Thx!
2) I did a thesis for my master's program, using generative adversarial networks (GANs) for image compression. It was by no means novel or a breakthrough, but what I did learn (and this is so obvious it's painful to write) is that you should pretty much never train from scratch; you should always use transfer learning. As for my last company, the work was basically taking state-of-the-art models from the MMDetection Python package, fine-tuning them to our use case, and then deploying them. So I wasn't really doing anything from scratch.
Happy to chat more about your specific use case if you're interested! You can email me at zbellay at gmail dot com.
The jump to core ML is a bit trickier. Competing with people with PhDs is a drag. I wish people could give me some tips there too.
ML guys build the fun stuff, go to conferences, etc. We maintain their stuff and work the 60 hour weeks answering pages.
It's mind-numbing. Arguably some of the most boring work I've ever done, especially knowing that you're just a CI/CD robot. Nothing has motivated me more toward starting a business than watching other people have fun while you clean up their messes.
MLOps is what you're describing, and it's probably the number one field I'd recommend a backend dev go into right now.
I watched a friend of mine go from maybe a few Python courses to writing some really impressive ML code as her first project within a few months, with some help from people who know their stuff, which I found pretty impressive. I think as long as you're building on top of what's already out there, you can get tons of utility from the solutions that have emerged in the past few years without a ton of effort. Try dipping your toes into something simple like object recognition and you'll find it's pretty easy.
If you're talking about getting into the field at a level where you're actually developing these technologies yourself, then I hope your math is at least college level. Reading deeper into the docs of the tools I use, they show me calculus and linear algebra. I don't pretend to understand it very well.
Where it would start to get tricky is if you have to do more than 'consume' ML libraries. Everyone can learn how to use a library or API, and getting some training going isn't all that hard either. But if you have to build said library, or come up with a new modelling method, that's where it's a real transition and gets really hard to simply 'switch'. It's also one of those areas where a PhD really helps, not from a "certification-as-entrypass" perspective, but because this gets down to hard science. For most companies, however, that's a point they never reach.
I built https://FakeYou.com as a side project, and it blew up. I quit my job after I realized the potential, added monetization, and started to broaden what we do.
I've been working on https://storyteller.ai for a year and plan to launch our platform soon.
Both of these tool sets reinforce one another.
I'm hiring engineers who want to do AI instead. Please reach out! Our stack is Rust / Unreal / PyTorch / k8s.
"My server blew up" is likely negative.
"My youtube channel blew up" is likely positive, it means it went viral.
If a business blew up, it's almost certainly positive.
> Has anyone made the official career pivot to the ML/AI field?
If you're talking about using ML/AI-related tools and algorithms and taking advantage of what they can do, maybe for data processing etc., then that's not really too hard an ask; in fact, these days, depending on your role, there could be a natural progression into these areas.
The problem comes with the core of this type of work: creating the algorithms, building new models, processing raw data into something useful, and sometimes even working close to the hardware level. I find it hugely academic and mathematically focused, for obvious reasons.
It's certainly stuff that flies over my head and for me personally no matter how interested I am in it, I don't think it will ever 'click' for me.
But I’ve also seen colleagues pivot into data engineering. They’ve done it within the same company by simply asking, I guess? When there’s a role available and you do your homework, there’s a chance to change fields.
But now with the general purpose power of the ChatGPT API / OpenAI Embeddings, things like Stable Diffusion, and Eleven Labs, etc., and the expectation of new models coming out that have visual understanding integrated with the large language model, and quite possibly even more intelligence, I don't feel that ML is a good path for me. It makes more sense for me to just leverage the APIs to build applications.
I get the impression that optimized (multimodal) transformer models are going to be readily adaptable to most tasks, so it's much less important going forward to do "real research" in order to get results.
As soon as the GPT3 API came out I started experimenting and moving towards launching https://aidev.codes. So now I have quite a bit of experience with prompt engineering for GPT, and a few other AI-related APIs. I am looking to raise money for marketing aidev.codes. If anyone wants to hire me, see the email in my profile.
My approach has been to start at a high level, with a specific goal in mind, and to progressively go deeper. The specific goal has been really helpful IMO: it prevents aimless shuffling about and provides a good metric for whether you're making progress. When I started I was basically just producing training data and treating the models, which were open source on GitHub, as a black box. At this point I've made a lot of modifications to the actual model code itself and I'm learning a ton. There's of course a bunch of adjacent skills that are similar to traditional backend skills, but slightly different. Take autoscaling, for example: there aren't many autoscaling solutions for GPU VMs yet. Some startups are working in this space, but IMO it's good to have a rock-solid hosting solution you don't have to worry about too much.
I opened up the beta of my product just last week at https://bondsynth.ai/signup
The goal is to either have my startup succeed or to move into an ML engineer position at a small-to-medium sized company.
I got somewhat lucky, as the feature my team owned was powered by ML. After gaining credibility on the mobile side, I worked with my manager to transition to backend. I did backend for about a year and was fortunate with the timing: my team was launching a new product with a model it owned. I got to work closely with ML engineers on it, and eventually I became the DRI of the feature along with the model. After two more years I realized that ML was moving a bit faster than I could keep up with by reading white papers, and decided to pivot to MLOps. This lets me leverage the strengths in distributed computing I'd developed and stay very close to ML without having to study math in my spare time.
That's sort of what I've been doing since. It's much more interesting than solving botched up React Hooks, but there is about the same ratio of tedium:interesting work. I happen to like math, someone who does not like math... they're gonna go a little batty I think.
I haven't even raised my rates. I'm having enough fun with it.
So I think the answer you're after is either "Luck" or "Masochistic streak" ? :)
In short, all this AI/ML stuff is just buzzwords; ultimately, the work you will do at almost all of these companies is regular, run-of-the-mill work with no real relation to ML or AI.
All the ML engineers I have encountered thus far have a Ph.D in physics or math. No way I can compete with that level of education!
It's not the usual path, but it's not impossible either.
Building the thing isn't supposed to be the challenge.
I joined Grab.com on their Safety team and started working on their face recognition technologies. This got my feet wet in ML. Now I am leading their content moderation efforts.
TL;DR: Find an "ML adjacent" engineering role and take on ML/AI work.
"ML adjacent" roles could be, content moderation, safety, ads, and search.
I’m happy with the work I do. The company culture requires some thoughtful navigation. I’m also happy with the benefits (like travel and working from Asia).