I'm curious how you decided to model the data in your Neo4j database. How did you do the 'Suggested Readings' section? What does the Cypher query that drives it look like?
How do you like using AlchemyAPI? Is it doing all the NLP stuff for you?
The Suggested Readings section pulls the most relevant concept from the article and finds connected articles that share the same concept at a high relevance. This way suggested articles are more than just keyword hits; it's all about relevance. I'm still tweaking this query, and there's a lot more that can be done with it, such as matching sentiment and emotion. As the dataset grows I'll look to add a feature that pulls a list of articles based on a cluster of highly associated entities.
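For anyone curious, here's a rough sketch of what a query like that could look like. The labels (`Article`, `Concept`), the `HAS_CONCEPT` relationship with a `relevance` property, and the 0.7 threshold are all assumptions on my part, not the actual schema:

```cypher
// Hypothetical schema: (:Article)-[:HAS_CONCEPT {relevance}]->(:Concept)
// Find the source article's single most relevant concept...
MATCH (a:Article {id: $articleId})-[r:HAS_CONCEPT]->(c:Concept)
WITH a, c ORDER BY r.relevance DESC
LIMIT 1
// ...then other articles linked to that concept at high relevance
MATCH (other:Article)-[r2:HAS_CONCEPT]->(c)
WHERE other <> a AND r2.relevance > 0.7
RETURN other
ORDER BY r2.relevance DESC
LIMIT 5
```

Filtering on the relationship's relevance score (rather than just matching the concept node) is what keeps the results from degrading into plain keyword hits.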
As for Alchemy, I've tried a number of different NLP APIs and, in my opinion, none of them have come close to matching Alchemy's accuracy. It does make mistakes, but at a low enough rate that they're easy to correct manually.
How are you finding Neo4j handles the scale of reading and writing all these stories? I've had a positive experience so far, but I'm only in the low thousands of articles.