I took this course and Dan Boneh's cryptography course and both were truly excellent.
Update: You have to go into the individual courses within the specialization and the enroll popup will have an audit option.
First Course is here: https://www.coursera.org/learn/neural-networks-deep-learning...
There are two types of courses on Coursera: free and paid.
In the case of paid courses, you can go to the course, navigate to the "Buy Subscription" page, and click "audit the course". You can watch all the videos for free, but you don't get access to quizzes and programming assignments (you never know what a web search will turn up ;)) ⊕. You do not get a certificate for completing a course or for completing all courses of a "Specialization".
In the case of a free course, you get access to all the videos, quizzes, and assignments, but no certificate of any kind. Instead of going to the subscription page, you can just click "Enroll" and choose the no-certificate option.
There are some great courses in the free tier (videos + assignments, no certs) as well. Dan Boneh's Cryptography and Grossman's Programming Languages A, B, C come to mind. Also Model Thinking by Scott Page.
There were some great discussions on HN in the past. [0][1][2]
⊕ There are courses where duplicates of paid assignments and quizzes are provided under "Practice Assignment" as opposed to "Graded Assignment". Like Martin Odersky's Functional Programming Principles in Scala MOOC.
[0]: https://news.ycombinator.com/item?id=25245125
(specifically the crypto course sounds interesting)
The most practical takeaway I got from Ng's course was the dangers of under and overfitting your data and techniques for detecting when you make that mistake.
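To make that concrete, here is a toy sketch (my own example, not material from Ng's course) of the standard detection technique: compare training error against held-out validation error as model capacity grows. The data, degrees, and split here are all made up for illustration.

```python
# Toy illustration of detecting under/overfitting via a train/validation
# split. All data and model choices here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a smooth target function.
x = np.sort(rng.uniform(0, 3, 40))
y = np.sin(2 * x) + rng.normal(0, 0.3, size=x.shape)

# Hold out every other point for validation.
x_tr, y_tr = x[::2], y[::2]
x_va, y_va = x[1::2], y[1::2]

def errors(degree):
    """Fit a polynomial of the given degree; return (train MSE, val MSE)."""
    coeffs = np.polyfit(x_tr, y_tr, degree)
    mse = lambda xs, ys: float(np.mean((np.polyval(coeffs, xs) - ys) ** 2))
    return mse(x_tr, y_tr), mse(x_va, y_va)

for d in (1, 4, 12):
    tr, va = errors(d)
    print(f"degree {d:2d}: train MSE {tr:.3f}, val MSE {va:.3f}")
# Underfitting: both errors are high (degree 1).
# Good fit: both errors are low and close together (around degree 4).
# Overfitting: train error keeps falling while val error stays high or
# climbs (degree 12) -- the gap between the two is the warning sign.
```

The same idea, plotted as learning curves, is how the course teaches you to diagnose high bias vs. high variance.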
Of course, you want to have some understanding of what's going on under the covers but, for a lot of people, starting from first principles is quite hard and isn't really necessary.
ML is dominated by gigantic datasets and massive computing power, neither of which individuals usually have much of.
It is unlikely that you could build a major product with it, but it could teach you neat tricks to speed up some parts of your work. Also, like CS101, it is a necessary first step toward a career in ML. So you might as well do it.
I know a bunch of business analysts and data analysts who have gotten a job based on what they learnt in this course. Of course, they also had a STEM degree alongside it, but this course made a difference.
The things I learned here helped me gain a solid foundation, which, in turn helped me learn Deep Learning.
And Deep Learning feeds me now.
The good thing about this course is that it is not math-shy. It is not mathematically rigorous (there are no proofs and so on), but math is omnipresent.
Andrew Ng's MOOC is the best game in town. Ng is among the best teachers I have ever seen.
You could use ML in your job/company, but then you don't need this course; you just use an ML product.
See this course as a hobby thing, or take it if you are in HS and want to start preparing for college; otherwise there are better uses of your time.
ML product?
Admittedly, I also bought textbooks and worked through tutorials.
https://news.ycombinator.com/item?id=31204055
I certainly was excited when I saw this headline. Thought maybe it was early
https://omscs.gatech.edu/cs-7641-machine-learning
https://omscs.gatech.edu/cs-7642-reinforcement-learning (I took this before ML but it's supposed to come after. There is some overlap. Probably my favorite graduate course.)
https://omscs.gatech.edu/cs-7646-machine-learning-trading (IMO not amazing)
Much more basic (took this before OMSCS):
https://www.udacity.com/course/intro-to-machine-learning--ud...
I'm sure there are many more.
It's a recorded version of a real Caltech undergrad course, and it's focused on understanding the math behind these algorithms, not just applying black-box ML libraries.
It's much less practical, but I feel like it teaches you more.
[1]: https://www.coursera.org/learn/machine-learning/
[2]: https://www.coursera.org/learn/neural-networks-deep-learning...
Octave is very easy to learn if you have previous programming experience.
You won't _write_ much code. There will be cookie-cutter code, and you will fill in some blanks. A line here, a line there.
Trust me, Octave wasn't a deal-breaker if you tried. And a lot of the formulae translated almost directly into the code.
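As an example of formulae mapping onto code: the course's linear-regression cost J(θ) = 1/(2m) · Σ(Xθ − y)² becomes a single vectorized line. This sketch is in NumPy rather than the course's Octave (the data here is made up, not the course's starter code), but the correspondence is the same.

```python
# Vectorized linear-regression cost, written to mirror the formula
#   J(theta) = 1/(2m) * sum((X @ theta - y)^2)
# NumPy stand-in for the Octave version used in the course.
import numpy as np

def compute_cost(X, y, theta):
    m = len(y)  # number of training examples
    return float((1 / (2 * m)) * np.sum((X @ theta - y) ** 2))

# Tiny hypothetical dataset: first column of X is the bias term.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 2.0, 3.0])

print(compute_cost(X, y, np.zeros(2)))            # theta = 0: nonzero cost
print(compute_cost(X, y, np.array([0.0, 1.0])))   # perfect fit: cost 0.0
```

That one-to-one mapping between the math on the slides and a line of matrix code is most of what the "programming" assignments ask for.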
But I've never seen anything in actual production written in Matlab. Did Matlab provide something at the time that others did not? If so, how did they move Matlab models into production? Or did they prototype a model with basic outcomes and then code a representation of it in C++, etc.?
It was around the time I was in university that Python really matured for numerical computing, but professors (as opposed to grad students) were likely to be already familiar with Matlab, so there wasn't much reason for them to learn Python. Andrew Ng was already a mid-career researcher when he made his course, which was probably based on older materials (I also learned basic neural networks in my numerical computing class in 2008), so it made sense for him to continue to use Matlab, especially because Octave exists as an open-source reimplementation of the basic functionality.
These days, you wouldn't use anything but Python for ML, at least until you really productionize the implementation at a large scale, in which case you might rewrite in C++ or Rust (I don't know if they even bother rewriting these days when most of the computation happens in GPUs or TPUs). And it's my understanding, although I'm not really too familiar these days, that Matlab has mostly pivoted into providing a toolbox of all sorts of esoteric numerical methods for engineering-related tasks like finite element analysis, as well as hardware simulation (using Simulink).