The end result is that the desired functionality either gets hacked into the plugin somehow or is not available at all.
The problem with such a plugin-based architecture is that it relies on a well-designed interface. The person who designs the interface needs a very good idea of how that interface will be used in the future, which is difficult and often impossible.
When business requirements change, you face a difficult dilemma: insist on "no", introduce a minimal hack, or redesign the interfaces to support the use case cleanly (possibly a big task).
> your encapsulation suffers death by a thousand cuts. you can end up with a de facto monolithic codebase that also has a complicated plugin interface that doesn't really encapsulate anything.
Yes, the worst outcome of all. In reality, plugin-based architecture is no silver bullet. It can be very counterproductive, especially when you're figuring out what you actually want to build as you build it.
one thing I will add is that not every new feature has to be a plugin just because you have a plugin interface. "implement it directly in the core" is a perfectly valid fourth choice. some things just aren't suited to a plugin implementation.
Yes. Quoting my answer to your reply's parent:
""" It depends on the scope of the functionality. For example, right now, authentication and token generation are in the core, but it's okay right now because authentication spans across the whole product.
We eventually will extract it out, so we could use it as a component in another product, but for now, it's not inappropriate to leave it in the core. """
>When business requirements change, you face a difficult dilemma: insist on "no", introduce a minimal hack, or redesign the interfaces to support the use case cleanly (possibly a big task).
Some days are easier than others.
>Yes, the worst outcome of all. In reality, plugin-based architecture is no silver bullet. It can be very counterproductive, especially when you're figuring out what you actually want to build as you build it.
That's why I talked about scope and abstraction level. We make the few, loose assumptions that get a lot of legwork done automatically. We don't make further assumptions just for a 1% advantage, and if we do, there's a fallback. For example, we say that a plugin has a certain structure and is expected to ship, say, an icon file. If it's not there, it's not there: the plugin is loaded but just not displayed. We issue a warning, in case it was a mistake, but the application does not break.
Very few and loose "specs" that one can go through quickly and easily without looking at a checklist or something.
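As a hypothetical sketch of that loose spec (the directory layout, `icon.png` filename, and loader shape here are illustrative assumptions, not the actual codebase), a plugin loader that tolerates a missing optional file might look like:

```python
import logging
from pathlib import Path

log = logging.getLogger("plugins")

def load_plugins(plugins_dir):
    """Discover plugins under plugins_dir, tolerating missing optional files."""
    plugins = []
    for entry in sorted(Path(plugins_dir).iterdir()):
        if not entry.is_dir():
            continue
        plugin = {"name": entry.name, "path": entry}
        icon = entry / "icon.png"
        if icon.exists():
            plugin["icon"] = icon
            plugin["displayed"] = True
        else:
            # Loose spec: a missing icon is not fatal. Load the plugin,
            # just don't display it, and warn in case it was a mistake.
            plugin["displayed"] = False
            log.warning("plugin %r has no icon.png; loaded but not displayed",
                        entry.name)
        plugins.append(plugin)
    return plugins
```

The point of the fallback is that a violation of an optional part of the spec degrades gracefully (a warning, a hidden plugin) instead of breaking the application.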
Again, it's not a panacea. The underlying assumption in what I wrote is that neither I nor the reader believes in silver bullets. It's not a dichotomy: the question, of course, is not whether a plugin architecture solves all problems and makes bacon, or solves nothing, and I may have been unclear in my message. My point was that it's one of the most useful things we have done because it reduced the amount of work we had to do. We still wake up and build product.
One reason we did this was that we built custom, turn-key ML products for large enterprises. Complete applications, from data acquisition and model training to "the JavaScript", the admin interface, user management, etc.
Now... these large enterprise clients were in a specific sector. We could hardly sell the product to other, similar clients because we couldn't just pick and choose which components or features to put on a skeleton.
It took us a lot of time because these projects were both "software engineering" and "machine learning". In other words, we were toast: the worst of both worlds, as we were building complete applications that even allowed their people to train models themselves.
It took a toll on morale. At some point we were working on eight different projects, each with a different code base and a different subset of the team. We were fed up with this and wanted to do things differently. We wanted to bring the time it took to ship a project as close as possible to the time it took to train models, which we historically did rapidly. It was all the rest that took time.
Total time = time to define problem + time to get data + time to produce models + time to write application + a big hairy epsilon
We wanted to bring "Total time" to its irreducible form. We didn't want to keep writing different applications for clients. We knew how to do it, but we had done it enough times, for enough clients, to notice patterns we wanted to extract into components. We were also losing time to the ML project lifecycle (experiment tracking, model management, collaboration, etc.). We didn't want to keep asking questions like "Which model is deployed again? What data produced that model? I tried your notebook on my machine, it doesn't work!", or fielding exchanges like "DS: Hey, Jugurtha... Can you deploy my model? Jugurtha: I'm busy right now. I'll do it as soon as possible."
So we started building our ML platform[0] to remove as much overhead as possible while staying flexible. For example, one of our design goals is that anything one can do on the web app, one should be able to do with an API call.
- [0]: https://iko.ai