IIRC, for OS X, you must create the link from the host. It works because `ln` doesn't check whether the destination of the link exists. So running
ln -s /home/vagrant/node_modules node_modules
in the shared folder from the host will create a link that is functional in the guest. No idea for Linux hosts.
A lot of module writers don't care about Windows.
A big part of the node ecosystem is that there is a module for practically anything. While node core has great support for Windows, and also helps developers write Windows-compatible code, from what I have seen it's not often followed.
Any number of simple things break modules on Windows: using a '/' as a path separator instead of path.join, native modules, bash or shell scripts used in pre- or post-install scripts, and probably a hundred other small ways to screw it up. Even trying to make your own module Windows-compatible is often thwarted by a dependency that isn't. Like Java's promise of "Write once, run anywhere", it's a great idea, but hasn't completely panned out.
I really hope that as the node community matures, there can be more of a focus on making sure things work on Windows. Node is a great platform partly because of all those modules, and getting them all working will make writing cross-platform apps and services a heck of a lot easier.
It seems ridiculous to me how many folders are created when I just need a couple of node modules to do something.
As a result I almost always put all my projects near the root, in something like /src, to reduce the odds of running into these problems.
"You are the one playing games - calling core parts of Windows like Explorer "3rd party tools", and suggesting that not supporting long paths is a bug. Microsoft have made it clear repeatedly that non-support for long paths is not a bug, and not something that will change.
A package manager creating paths that do not work with the majority of the software written for an OS, then claiming compatibility with the OS, is playing games at your users' expense."
At the end of the day, Windows won't be the thing changing. haha.
npm's strategy of nesting like that made things simpler when it was a newer project, but it's time to come up with something more robust.
I like Maven's approach better; a centralized repository directory. In Node / NPM's case, given that each library has a simple name, I can imagine a directory structure ~/.node_modules/package/1.0.0. Severely reduces filesystem depth, and probably fixes re-downloads of the same package / versions too. Only requirements are a rewrite or update of node's require method, NPM's package install directory and maybe some more strictness about NPM releases.
I've been writing Python and Go for the last 12 months, and by comparison their module systems are absolute disasters.
But it's not a NodeJS problem, it is an NPM problem. NodeJS has other problems though (the lack of governance by a dedicated foundation is the most important one; the obvious second is NodeJS's dependency on what Google does/doesn't do, since Google maintains V8).
Google maintaining V8 is one of the big reasons why Node has been successful. There is no way an independent Node.JS foundation would have the resources to optimize and test JS like the Chrome team. So far, the problems have been minor and V8 has done a decent job keeping up with upcoming JS standards.
So this isn't a problem.
> 1 function while requiring 10 other modules that have just 1 function themself that require 10*10 other modules
As of now NPM is working (in spite of these problems) because those modules are tiny js files. I agree it would be beneficial to have better package management.
In fact, that's exactly what's giving me problems right now. My browserify task breaks due to the NoYieldInGenerator error in parsing. The dependency (esprima parser) has been patched upstream, but now that has to make its way into 4 dependencies inside browserify, and then browserify has to update itself. My alternative is to make 5 forks just to solve this. :/
It would have been great if they all used a single shared lib, and even better if I could manually override dependency versions of my dependencies.
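(For what it's worth, npm eventually grew exactly this: since npm 8.3 a package.json can carry an overrides field that force-pins transitive dependency versions, and Yarn has a similar resolutions field. A sketch, with an illustrative esprima version:)

```json
{
  "name": "my-app",
  "overrides": {
    "esprima": "2.7.3"
  }
}
```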
The strategy I took with node_modules/, components/ and bower_modules/ was to just follow whatever folder structure it found as far as necessary and look for manifest files (package.json, bower.json and component.json) and register modules at the path of wherever one of those three file types is found. This approach allows any folder organization whatsoever. This information was then saved to a key-value store and the store is only updated when the mtime of the folder changes. This makes startup relatively fast since folders are only re-indexed if something has genuinely changed.
Writing an algorithm that is dependent on the folder structure is, to me, a fundamentally bad idea most of the time, especially when you have a manifest file that better identifies a module. Additionally, for modules with semver ranges or installed from unusual locations (e.g. URIs: git://, git+ssh://, https://, etc.), you can write an additional metadata file with install information and version-locking information (like Gemfile.lock).
If people don't rely on the file system organization as an API, npm and node's require() algorithm can trivially reorganize or even use different folder organization schemes on different operating systems if there were a good reason to do so.
Simple-enough fix, though: drop an extra directory in there, call it "state", and have each state-container in that dir be named after the hash of the dependency path you would have traversed to load it. Virtualize modifications to packages into those state-containers.
(This is also pretty much how Windows protects itself from programs that think keeping their state in the Program Files directory is a good idea.)
Naming Files, Paths, and Namespaces on MSDN explains the details: http://msdn.microsoft.com/en-us/library/windows/desktop/aa36...
Long paths are just most ugly to use: "Long paths with the \\?\ prefix can be used in most of the file-related Windows APIs, but not all Windows APIs. For example, LoadLibrary, which maps a module into the address of the calling process, fails if the file name is longer than MAX_PATH. So this means MoveFile will let you move a DLL to a location such that its path is longer than 260 characters, but when you try to load the DLL, it would fail. There are similar examples throughout the Windows APIs; some workarounds exist, but they are on a case-by-case basis."
An hour of swearing was enough to convince me to stop using windows for this entirely.
What's the design reason for Windows tools to have a max of only 260 file path chars? If it's not the OS, but the tools he is using, why not use tools that work? Isn't this a bug in the tools?
Honestly, if he is doing web development, his life will be easier with a Linux distro.
So in C you can just say `char buff[MAX_PATH] = {0};` and not worry about heap allocation.
That's also the reason why it can't ever be extended - because MAX_PATH is compiled into every binary and you would create instant buffer overflow vulnerabilities in a lot of software.
It was a reasonably good idea at the time the decision was made.
I would use this a lot, but I hit many errors trying to apt-get install packages.