This is documentation of how Facebook implemented a technology stack that uses reinforcement learning techniques to solve a concrete problem, namely: "Notifications at Facebook".
So what can other developers and business owners take from this? I don't see anything about the downstream product impacts. Does it impact users' conversion-to-paid rate? Does it reduce human labor? How does it improve benefits to users? All I see them write are two things:
"We observed a significant improvement in activity and meaningful interactions by deploying an RL based policy for certain types of notifications, replacing the previous system based on supervised learning."
I'm sorry, but there is absolutely nothing rigorous in that statement. How are "meaningful interactions" defined? Hopefully they aren't still arguing that more interaction automatically makes users better off.
"After deploying the DQN model, we were able to improve daily, weekly, and monthly metrics without sacrificing notification quality."
Improve for whom? Well, obviously for Facebook and for how much activity people generate. Not necessarily for the user, who may not actually be getting more value from it.
What's the Return on Investment for this system?
Listen, I'm a huge fan of being open with business practices, research, etc. I'm also obsessive about RL and making progress in the field.
What I can't stand, however, is the lack of rigorous and tangible proof that we're making things better for users or for society broadly with RL yet, or even, in most cases, getting positive ROI for the effort we're putting into ML/DL.
I've built these tools at scale, so it hurts to say this, but the economics just aren't lining up yet across the entire ML/DL industry, and that has me worried that another AI winter is coming.
For example, the scikit-learn paper: http://www.jmlr.org/papers/volume12/pedregosa11a/pedregosa11...
I don't think it's a bad way to approach knowledge dissemination, by the way; it is, however, indicative of the problem of reproducibility and explainability in AI broadly.
I bring this point up simply to be another voice stating that we need more rigorous methodology in AI research if we are going to make advances that are focused first on knowledge, rather than primarily on the applicability of technology.
Way back before AI was cool, there was a good paper on this [1] that is still very relevant today.
Quoting from the abstract:
"There are two central problems concerning the methodology and foundations of Artificial Intelligence (AI). One is to find a technique for defining problems in AI. The other is to find a technique for testing hypotheses in AI. There are, as of now, no solutions to these two problems. The former problem has been neglected because researchers have found it difficult to define AI problems in a traditional manner. The second problem has not been tackled seriously, with the result that supposedly competing AI hypotheses are typically non-comparable."
[1] https://link.springer.com/chapter/10.1007/978-1-4471-3542-5_...
I recommend that anybody who is interested in this read section 9 in this paper: "NOTIFICATIONS AT FACEBOOK". It brings into focus real ways that this technology is used and is useful.
Caffe2 install page: "We only support Anaconda packages at the moment. If you do not wish to use Anaconda, then you must build Caffe2 from source." => We are a company with a 400B+ market cap but are too lazy to support more than one installation configuration. Good luck dealing with dependency hell, poor ML grad student researcher.
MXNet install page: "You can either upgrade your CUDA install or install the MXNet package that supports your CUDA version." => We welcome you with open arms! No matter your configuration, we have a pre-built package for you!
I agree that installing without Docker is a pain, but you should follow our install guide. In particular, Caffe2 is included in PyTorch 1.0, so you don't have to install it separately :-).
I guess that's not too unusual for their open-source contributions at this point.
Historically, we have used supervised learning models for predicting click through rate (CTR) and likelihood that the notification leads to meaningful interactions.
We introduced a new policy that uses Horizon to train a Discrete-Action DQN model for sending push notifications to address the problems above. The Markov Decision Process (MDP) is based on a sequence of notification candidates for a particular person. The actions here are sending and dropping the notification, and the state describes a set of features about the person and the notification candidate. There are rewards for interactions and activity on Facebook, with a penalty for sending the notification to control the volume of notifications sent. The policy optimizes for the long term value and is able to capture incremental effects of sending the notification by comparing the Q-values of the send and don’t send action.
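To make the decision rule concrete, here is a minimal, hypothetical sketch of what "comparing the Q-values of the send and don't send action" looks like. Everything here is invented for illustration (the feature names, the linear stand-in for the trained network, the penalty value); Horizon's actual models are trained neural networks with their own APIs.

```python
# Discrete-Action DQN decision rule (sketch): two Q-values per candidate
# notification, one for each action, and the *difference* between them
# estimates the incremental long-term value of sending.

SEND, DROP = 0, 1

def q_values(state, weights):
    """Stand-in for a trained DQN: returns (Q_send, Q_drop) for a state.

    `state` is a feature vector describing the person and the notification
    candidate; here we fake the network with two linear scorers.
    """
    q_send = sum(w * x for w, x in zip(weights[SEND], state))
    q_drop = sum(w * x for w, x in zip(weights[DROP], state))
    return q_send, q_drop

def policy(state, weights, send_penalty=0.1):
    """Send only if the incremental value of sending beats the volume penalty.

    The penalty term plays the role of the paper's "penalty for sending the
    notification to control the volume of notifications sent".
    """
    q_send, q_drop = q_values(state, weights)
    return SEND if (q_send - q_drop) > send_penalty else DROP

# Toy features: [predicted CTR, recent activity, notification relevance]
weights = [
    [1.0, 0.2, 0.8],  # weights for Q(s, send)
    [0.0, 0.6, 0.0],  # weights for Q(s, drop)
]

high_value = [0.9, 0.1, 0.7]   # likely to drive a meaningful interaction
low_value  = [0.05, 0.9, 0.1]  # user is active anyway; sending adds little

print(policy(high_value, weights))  # sends: large incremental Q-value gap
print(policy(low_value, weights))   # drops: sending adds no incremental value
```

The point of the Q-value *difference* is incrementality: a notification to an already-active user may have a high absolute value but a small send/drop gap, so the policy drops it, which a supervised CTR model ranking notifications in isolation would not do.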
Google's framework is at https://github.com/google/dopamine and I don't believe it's generated discussion on HN before.