I actually remember, early in my career, working for a small engineering/manufacturing prototyping firm that did its own software. There was a senior developer there who didn't speak English very well, but he kept insisting that the "Business layer" should be on top. How right he was. I couldn't imagine how much wisdom and experience was packed into such simple, malformed sentences. Nothing else matters, really. Functional vs imperative is a very minor point IMO, mostly a distraction.
Your advice is the opposite of "functional core, imperative shell". In FCIS, the IS is the generic part, kept simple because it's usually hard to test (it deals with resources and external dependencies). So by being simple, it's more unit-testable.
On the other hand, FC is where the business logic lives, which can be complex and specific. The reason why you want that "functional" (really just another name for "composable from small blocks") is because it can be tested for validity without external dependencies.
So the IS shields you from the technicalities of external dependencies: what kind of quirks your DB has, whether we're sending data over the network or writing to a file, whether the user inputs commands in Spanish or English, whether you display a green square or a blue triangle to indicate the report is ready, etc.
On the other hand, FC deals with the actual business logic (what you want to do), which can be both generic and specific. These are just different types of building blocks (we call them functions) living in the FC.
FCIS is exemplified by user-shell interaction. The user (FC) dictates the commands and interprets the output, according to her "business needs". While the shell (IS) simply runs the commands, without any questions of their purpose. It's not the job of IS to verify or handle user errors caused by wrong commands.
But the user doesn't do stuff on her own; you could take her to a pub and she would tell you the same sequence of commands when facing the same situation. In that sense, the user is "functional": independent of the actual state of the computer system, just as the return value of a mathematical function depends only on its arguments.
Another example is MVC, where M is the FC and VC is the IS. Although it's not always exactly like that, for a variety of reasons.
You can think of IS as a translator to a different language, understood by "the other systems", while the FC is there to supply what is actually being communicated.
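The FC/IS split described above can be sketched in a few lines. This is a Python illustration with hypothetical names (`User`, `expiry_notices`, `run` are all invented for the example), not anyone's actual implementation:

```python
from dataclasses import dataclass

# Functional core: pure business logic; output depends only on the input.
@dataclass(frozen=True)
class User:
    email: str
    expired: bool

def expiry_notices(users):
    return [f"Account expired: {u.email}" for u in users if u.expired]

# Imperative shell: generic, unquestioning glue around I/O.
# It runs "commands" without knowing their business purpose.
def run(load_users, send):
    for notice in expiry_notices(load_users()):
        send(notice)
```

The core is tested with plain asserts on values; only `run` ever touches a real database or mail server.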
"Functional core, imperative shell" (FCIS) is a matter of implementing individual software components that need to engage with side-effects --- that is, they have some impact on some external resources. Rather than threading representations of the external resources throughout the implementation, FCIS tells us to expel those concerns to the boundary. This makes the bulk of the component easier to reason about, being concerned with pure values and mere descriptions of effects, and minimizes the amount of code that must deal with actual effects (i.e. turning descriptions of effects into actual effects). It's a matter of comprehensibility and testability, which I'll clumsily categorize as "verification": "Does it do what it's supposed to do?"
"Generic core, specific shell" (GCSS) is a matter of addressing needs in context. The problems we need solved will shift over time; rather than throwing away a solution and re-solving the new problem from scratch, we'd prefer to only change the parts that need changing. GCSS tells us we shouldn't simply solve the one and only problem in front of us; we should use our eyes and ears and human brains to understand the context in which that problem exists. We should produce a generic core that can be applied to a family of related problems, and adapt that to our specific problem at any specific time using a, yes, specific shell. It's a matter of adaptability and solving the right problem, which I'll clumsily categorize as "validation": "Is what it's supposed to do what we actually need it to do?"
Ideally, GCSS is applied recursively: a specific shell may adapt an only slightly more generic core, which then decomposes into a smaller handful of problems that are themselves implemented with GCSS. When business needs change in a way that the outermost "generic core" can't cover, odds are still good that some (or all) of its components can still be applied in solving the new top-level problem. FCIS isn't really amenable to the same recursion.
Both verification and validation activities are necessary. One is a matter of internal consistency within the component; the other is a matter of external consistency relative to the context the component is being used in. FCIS and GCSS advise on how to address each concern in turn.
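A rough Python sketch of the GCSS idea (names invented for illustration): the core solves a whole family of "select and present" problems, and the shell pins it to today's specific need.

```python
# Generic core: applicable to a family of related problems.
def report(items, include, render):
    return [render(x) for x in items if include(x)]

# Specific shell: adapts the generic core to the problem at hand.
def overdue_report(invoices, cutoff_days):
    return report(
        invoices,
        include=lambda inv: inv["days_late"] > cutoff_days,
        render=lambda inv: f"Invoice {inv['id']}: {inv['days_late']} days late",
    )
```

When the business asks for a different report tomorrow, only the shell changes; `report` survives.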
Probably many reasons for this, but what I've seen often is that once the code base has been degraded, it's a slippery slope downhill after that.
Adding functionality often requires more hacks. The alternative is to fix the mess, but that's not part of the task at hand.
Another factor, and perhaps the key factor, is that contrary to OP's extraordinary claim, there is no such thing as objectively good code, or one single and true way of writing good code.
The crispest definition of "good code" is that it's not obviously bad code from a specific point of view. But points of view are also subjective.
Take for example domain-driven design. There are a myriad of books claiming it's an effective way to generate "good code". However, DDD has a strong object-oriented core, to the extent it's nearly a purist OO approach. But here we are, seeing claims that the core must be functional.
If OP's strong opinion on "good code" is so clear and obvious, why are there such critical disagreements at such a fundamental level? Is everyone in the world wrong, and OP the poor martyr cursed with being the only soul in the whole world who even knows what "good code" is?
Let's face it: the reason there is no such thing as "good code" is that opinionated people making claims such as OP's are actually passing off "good code" claims as proxies for their own subjective and unverified personal taste. In a room full of developers, if you throw a rock in a random direction you're bound to hit one or two of these messiahs, and no two of them agree on what good code is.
Hearing people like OP comment on "good code" is like hearing people comment on how their regional cuisine is the true definition of "good food".
I think this is a very common mistake. You've spent years, maybe decades, writing code and now you want to magically transfer all that experience in a few succinct articles. But no advice that you give about "the correct philosophy" is going to instantly transfer enough knowledge to make all large companies write good code, if only they followed it. Instead, I'm sure it's valuable advice, but more along the lines of a fragment within a single day of learning for a diligent developer.
A company I worked at recently had a more extreme version of this mistake. It had software written in the 1980s based on a development process by Michael Jackson (no, not that one!), a software researcher who spent his whole career trying to come up with silly processes that were meant to fix software development once and for all; he wrote whole books about it. I remember reading a recent interview with him in which he mourns that developers today are interested in new programming languages but not development methodologies. (The code base I worked on was fine, by the way, given that it was 40 years old, but not really because of this Jackson stuff.)
I'm reminded of the Joel on Software article [1] where he compares talented (naturally or through experience) developers to really talented expert chefs, and those following some methodology to people working at McDonald's.
[1] https://www.joelonsoftware.com/2001/01/18/big-macs-vs-the-na...
Good old "Programming as Theory Building". It's almost impossible to achieve this kind of transfer without already having the requisite lived experience.
I still find myself debating this internally, but one objective metric is how smoothly my longer PTOs go:
The only times I haven't received a single emergency call were when I left teammates a large and extremely specific set of shell scripts and/or executables that do exactly one thing. No configs, no args/opts (or ridiculously minimal), each named something like run-config-a-for-client-x-with-dataset-3.ps1, that took care of everything for one task I knew they'd need. Just double-click this file when you get the new dataset, or clone/rename it and tweak line #8 if you need to run it for a new client, that kind of thing.
The insides of the scripts/programs look like the opposite of DRY and every similar principle I've been taught (save for KISS and others similarly simplistic).
But the result speaks for itself. The further I go down that excessively basic path, the more people can get work done without me online, and I get to enjoy PTO. Anytime I make a slick, flexible utility with pretty code and docs, I get the "any chance you could hop on?" text. Put the slick stuff in the core libraries and keep the executables dumb.
> Nothing else matters really. Functional vs imperative is a very minor point IMO, mostly a distraction.
I'm torn on this. This really is the faster way to higher quality.
OTOH, if more developers knew this, I wouldn't be so much faster when I create my systems for clients. I'd just be a "normal 1x dev".
I can implement features, sans AI assistance, in my LoB applications faster than devs with Claude Code can on their $FRAMEWORK.
Business layers should be accessible via an explicit interface/shape that is agnostic to the layers above it. So if the org decides to move from mailchimp to some other email provider the business logic can remain untouched and you just need to write some code mapping the new provider to the business logic's interface.
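A minimal sketch of that kind of seam (Python, hypothetical names; a real mapping layer for a provider like Mailchimp would be thicker than this):

```python
from typing import Protocol

# The explicit interface/shape the business layer depends on.
class EmailProvider(Protocol):
    def send(self, to: str, body: str) -> None: ...

# Business logic: knows nothing about any particular vendor.
def notify_expired(addresses, provider):
    for addr in addresses:
        provider.send(addr, "Your subscription has expired.")

# Swapping vendors means writing one new adapter like this one;
# the business logic above stays untouched.
class InMemoryProvider:
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))
```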
Maybe our visualizations are mixed up, but I always viewed things like cloud providers, libraries etc. as potentially short lived whereas the core logic could stick around forever.
...
> Coupling across layers invites trouble (e.g. encoding business logic with “intuitive” names reflecting transient understanding). When requirements shift (features, regulations), library maintainers introduce breaking changes or new processor architectures appear, our stable foundations, complected with faster-moving parts, still crack!
https://alexalejandre.com/programming/coupling-language-and-...
For concerns of code complexity and verification, code that asks a question and code that acts on the answers should be separated. Asking can be done as pure code, and if done as such, only ever needs unit tests. The doing is the imperative part, and it requires much slower tests that are much more expensive to evolve with your changing requirements and system design.
The one place this advice falls down is security: having functions that do things without verifying preconditions is exploitable, and they are easy to accidentally expose to third-party code through the addition of subsequent features, even if they are initially unreachable. Sun biffed this way a couple of times with Java.
But for non-crosscutting concerns this advice can also be a step toward FC/IS, both in structuring the code and acclimating devs to the paradigm, because you can start extracting pure code sections in place.
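The ask/act split described above might look like this in Python (names are illustrative, not from any real codebase):

```python
# Asking: pure code, covered entirely by fast unit tests.
def users_to_remind(users, now):
    return [u for u in users if u["expires_at"] < now]

# Acting: the imperative part, covered by slower integration tests.
def send_reminders(fetch_users, send_email, now):
    for user in users_to_remind(fetch_users(), now):
        send_email(user["email"])
```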
> having functions that do things without verifying preconditions are exploitable
Why would you do this? The separation between commands and queries does not mean that executing a command must succeed. It can still fail. Put queries inside the commands (but do not return the query results, that's the job of the query itself) and branch based on the results. After executing a command which may fail, you can follow it with a query to see if it succeeded and, if not, why not.
https://en.wikipedia.org/wiki/Command%E2%80%93query_separati...
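A toy illustration of that command/query discipline (Python, invented example): the command may fail, but failure is observed through a follow-up query rather than a return value.

```python
class Account:
    def __init__(self, balance):
        self.balance = balance
        self._last_error = None

    # Command: changes state, returns nothing; failure is recorded.
    def withdraw(self, amount):
        if amount > self.balance:
            self._last_error = "insufficient funds"
            return
        self._last_error = None
        self.balance -= amount

    # Queries: report state, change nothing.
    def last_error(self):
        return self._last_error
```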
What’s being described here is something lower level, that you keep as much code as you can as a side-effect-free “pure functional core”. That pattern is useful both for the “command” and “query” side of a CQRS system, and is not the same thing as CQRS
Performance and re-use are two possible reasons.
You may have a command sub-routine that is used by multiple higher-level commands, or even called multiple times within a single higher-level command. If the validation lives in the sub-routine, it will run multiple times, even when it only needs to run once.
So you are forced to choose either efficiency or the security of colocating validation, which makes it impossible to call the sub-routine with unvalidated input.
CQS will rely on composition to do any If A Then B work, rather than entangling the two. Nothing forces composition except information hiding. So if you get your interface wrong someone can skip over a query that is meant to short circuit the command. The constraint system in Eiffel I don’t think is up to providing that sort of protection on its own (and the examples I was given very much assumed not). Elixir’s might end up better, but not by a transformative degree. And it remains to be seen how legible that code will be seen as by posterity.
https://hemath.dev/blog/command-query-separation/
Down at the bottom it gets into composition to make utility functions that compose several operations. Any OO system has to be careful not to expose methods that should have been private, so that’s not specific to CQS. It’s just that the opportunities to get it wrong increase and the consequences are higher.
email.bulkSend(generateExpiryEmails(getExpiredUsers(db.getUsers(), Date.now())));
Many times it has confused my co-workers when an error creeps in: where is the error happening, and why? Of course, this could just be because I have always worked with low-effort co-workers; hard to say.
I have to wonder if programming should have kept Pascal's distinction between functions that only return one thing and procedures that go off and manipulate other things and do not give a return value.
What makes it hard to reason about is that your code is one-dimensional, you have functions like `getExpiredUsers` and `generateExpiryEmails` which could be expressed as composition of more general functions. Here is how I would have written it in JavaScript:
const emails = db.getUsers()
.filter(user => user.isExpired(Date.now())) // Some property every user has
.map(generateExpiryEmail); // Maps a single user to a message
email.bulkSend(emails);
The idea is that you have small but general functions, methods and properties, and then use higher-order functions and methods to compose them on the fly. This makes the code two-dimensional. The outer dimension (`filter` and `map`) tells the reader what is done (take all users, pick out only some, then turn each one into something else), while the inner dimension tells you how it is done. Note that there is no function `getExpiredUsers` that receives all users; instead there is a simpler and more general `isExpired` method, which is combined with `filter` to get the same result.

In a functional language with pipes it could be written in an arguably even more elegant way:
db.getUsers() |> filter(User.isExpired(Date.now())) |> map(generateExpiryEmail) |> email.bulkSend
I also like Python's generator expressions, which can express `map` and `filter` as a single expression:

email.bulk_send(generate_expiry_email(user) for user in db.get_users() if user.is_expired(Date.now()))

Question: if you want one email for expired users, another for non-expired users, and another for users that somehow have a date problem in their data...
Do you just do the const emails =
three different times?
In my coding world it looks a lot like doing a SELECT * FROM users WHERE isExpired < Date.now
but in some cases you just grab it all, loop through it all, and do little switches to do different things based on different isExpired.
db.getUsers()
|> getExpiredUsers(Date.now())
|> generateExpiryEmails()
|> email.bulkSend()
I think Elixir hits the nail on the head when it comes to finding the right balance between functional and imperative style code.

clock in this case is a thing that was supplied to the class or function. It could just be a function: () -> Instant.
(Setting a global mock clock is too evil, so don't suggest that!)
bulk_send(
generate_expiry_email(user)
for user in db.getUsers()
if is_expired(user, date.now())
)
(...Just another flavour of syntax to look at)

What you want is to use a language that has higher-kinded types and monads, so that functions can have both effects (even multiple distinct kinds of effects) and return values, but the distinction between the two is clear, and when composing effectful functions you have to be explicit about how they compose. (You can still say "run these three possibly-erroring functions in a pipeline and return either the successful result or an error from whichever one failed", but you have to make a deliberate choice to.)
Having a language where "func" defines a pure function and "proc" defines a procedure that can perform arbitrary side effects (as in any imperative language, really) would still be really useful, I think.
var users = db.getUsers();
var expiredUsers = getExpiredUsers(users, Date.now());
var expiryEmails = generateExpiryEmails(expiredUsers);
email.bulkSend(expiryEmails);
This is not only much easier to read, it's also easier to follow in a stack trace and it's easier to debug. IMO it's just flat out better unless you're code golfing.
I'd also combine the first two steps by creating a DB query that just gets expired users directly rather than fetching all users and filtering them in memory:
expiredUsers = db.getExpiredUsers(Date.now());
Now I'm probably mostly getting zero or a few users rather than thousands or millions.
This is actually closer to the way the first draft of this article was written. Unfortunately, some readability was lost to make it fit on a single page. 100% agree that a statement like this is harder to reason about and should be broken up into multiple statements or chained to be on multiple lines.
The rule I was raised with was: you write the code once and someone in the future (even your future self) reads it 100 times.
You win nothing by having it all smashed together like sardines in a tin. Make it work, make it efficient and make it readable.
Result<Users> userRes = getExpiredUsers(db);
if(isError(userRes)) {
return userRes.error;
}
/* This probably wouldn't actually need to return a Result IRL */
Result<Email> emailRes = generateExpiryEmails(userRes.value);
if(isError(emailRes)) {
return emailRes.error;
}
Result<SendResult> sendRes = sendEmails(emailRes.value);
if(isError(sendRes)) {
return sendRes.error;
}
return sendRes; // successful value, or just return a Unit type.
This is in my "functional C++" style, but you can write pipe helpers which sort of do the same thing:

Result<SendResult> result = pipe(getExpiredUsers(db))
    .then(generateExpiryEmails)
    .then(sendEmails)
    .result();
if(isError(result)) {
return result.error;
}
If an error result is returned by any of the functions, it terminates immediately and returns the error there. You can write this in most languages, even imperative/OOP languages. In Java, there's a built-in class called Optional, with options to treat null returns as empty:

Optional.ofNullable(getExpiredUsers(db))
    .map(EmailService::generateExpiryEmails)
    .map(EmailService::sendEmails)
    .orElse(null);
or something close to that, I haven't used Java in a couple of years.

C++ also added a std::expected type in C++23:
auto result = some_expected()
.and_then(another_expected)
.and_then(third_expected)
.transform(/* ... some function here, I'm not familiar with the syntax */);

expiry_date = DateTime.now!("Etc/UTC")
query =
from u in User,
where:
u.expiry_date < ^expiry_date
and u.expiry_email_sent == false,
select: u
MyAppRepo.all(query)
|> Enum.map(&generate_expiry_emails(&1, expiry_date))
|> Email.bulkSend() # Returns {:ok, %User{}} or {:err, _reason}
|> Enum.filter(fn
{:ok, _} -> true
_ -> false
end)
|> Enum.map(fn {:ok, user} ->
User.changeset(user, %{expiry_email_sent: true})
|> Repo.update()
end)
Mainly a lot of these examples do the expiry filtering on the application side instead of the database side, and most would send expiry emails multiple times, which may or may not be desired behavior, but definitely isn't the best behavior if you automatically rerun this job when it fails.

----
Edit: I actually see a few problems with this, too, since Email.bulkSend probably shouldn't know about which user each email is for. I always see a small impedance mismatch with this sort of pipeline, since if we sent the emails individually it would be easy to wrap it in a small function that passes the user through on failure.
If I were going to build a user contacting system like this I would probably want a separate table tracking emails sent, and I think that the email generation could be made pure, the function which actually sends email should probably update a record including a unique email_type id and a date last sent, providing an interface like: `send_email(user_query, email_id, email_template_function)`
Generally you'd distinguish which function call introduces the error with the function call stack, which would include the location of each function's call-site, so maybe the "low-effort" label is accurate. But I could see a benefit in immediately knowing which functions are "pure" and "impure" in terms of manipulating non-local state. I don't think it changes any runtime behavior whatsoever, really, unless your runtime schedules function calls on an async queue and relies on the order in code for some reason.
My verdict is, "IDK", but worth investigating!
I vaguely remember the problem was one function returned a very structured array dealing with regex matches. But there was something wrong with the regex where once in a blue moon, it returned something odd.
So, the chained functions did not error. It just did something weird.
Whenever weird problems would pop up, it was always passed to me. And when I looked at it, I said, well...
I am going to rewrite this chain into steps and debug each return. Then run through many different scenarios and that was how I figured out the regex was not quite correct.
I don't get how you got there from parent comment.
Pascal just went with a needless syntax split of (side-effectful) procedures and (side-effectful) functions.
_unitOfWork.Begin();
var users = await _usersRepo.Load(u => u.LastLogin <= whateverDate);
users.CheckForExpiry();
_unitOfWork.Commit();
That then writes the "send expiry email" commands from the aggregate to an outbox, which a worker then picks up to send. Simple, transactional domain logic.

(Clojure)

;; Nested function calls
(map double (filter even? '(1 2 3 4)))

;; Using the thread-last macro
(->> '(1 2 3 4)
     (filter even?)  ; the list is passed as the last argument
     (map double))   ; the result of filter is passed as the last argument
;=> (2.0 4.0)
Things like this have been added to python via a library (Pipe) [1] and there is a proposal to add this to JavaScript [2]
1: https://pypi.org/project/pipe/
2: https://github.com/tc39/proposal-pipeline-operator
email.sendBulk(generateExpiryEmails(db.getUsers(), Date.now()));
We should never be too extreme on anything, otherwise it would turn good into bad.
Still smells like in such a case the developer avoids the complications of abstraction or OOP by making the user deal with it. That's bad API design due to putting ideology before practicality or ergonomics.
No one _should_ do that, but that's a common enough problem (that usually doesn't get found until code is running in production). I suspect with the rise of vibe coding, it's going to happen more and more.
(Also, real-life systems of course do things inefficiently all the time)
In what application would you load all users into memory from database and then filter them with TypeScript functions? And that is the problem with the otherwise sound idea "Functional core, imperative shell". The shell penetrates the core.
Maybe some filters don't match the way database is laid out, what if you have a lot of users, how do you deal with email batching and error handing?
So you have to write the functional core with the side effect context in mind, for example using query builder or DSL that matches the database conventions. Then weave it with the intricacies of your email sender logic, maybe you want iterator over the right size batches of emails to send at once, can it send multiple batches in parallel?
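For the batching part, a small iterator helper is usually enough. This is a Python sketch; the batch size and any parallelism policy would be dictated by the email sender's limits, which aren't specified here:

```python
from itertools import islice

def batches(iterable, size):
    """Yield lists of at most `size` items from any iterable."""
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk
```

Each yielded chunk can then be handed to one bulk-send call, or fanned out to parallel workers.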
Generally, performance is a top cause of abstraction leaks and the emergence of less-than-beautiful code. On an infinitely powerful machine it would be easy and advisable to program using neat abstracrions, using purely "the language of" the business. Our machines are not infinitely powerful, and that is especially evident when larger data sets are involved. That's where, to achieve useful performance, you have to increasingly speak "the language of" the machine. This is inevitable, and the big part of the programmer's skill is to be able to speak both "languages", to know when to speak which one, and produce readable code regardless.
Database programming is a prime example. There's a reason, for example, why ORMs are very messy and constitute such excellent footguns: they try to bridge this gap, but inevitably fail in important ways. And having an ORM in the example would, most likely, violate the "functional core" principle from the article.
So it looks like the author accidentally presented a very good counterexample to their own idea. I like the idea though, and I would love to know how to resolve the issue.
You’d be surprised! I have worked on a legacy PHP service which did something very similar
email.bulkSend(generateReminderEmails(getExpiredUsers(db.getUsers(), fiveDaysFromNow)));
get all users and then filter out the few that will expire in 5 days, on a code level? That doesn't sound like it would scale.

But even if not, the example from the article is just hypothetical. `db.getUsers()` could be something that just retrieves rather efficient `[UserEmail, ExpiryTime]` pairs, and then you'd have to have a pretty enormous user base for it not to scale (a couple of million string/date pairs in memory should be no problem).
I fixed a performance issue at some point where a missing index meant a scan of millions of rows every login. It worked, and could log in 3 people per second or so. It was still terrible code.
Replace it with `getUsers(filters)` or even a specialised function, and it starts making more sense.
It's exactly this - I do regret using "db" a bit now after reading all of the comments here, as it's taken away focus from the main point. But yes, the post had to fit on a single page, and I needed to pick something that most engineers would be familiar with.
If you pick and recommend a pattern where filtering should happen separately from retrieving, all your code will be bad
Give a man a fish / teach a man to fish, but bad.
Of course by "invented" I mean that far smarter people than me probably invented it far earlier, kinda like how I "invented" intrusive linked lists in my mid-teens to manage the set of sprites for a game. The idea came from my head as the most natural solution to the problem. But it did happen well before the programming blogosphere started making the pattern popular.
It is not, it is being very specific about what it means and what it is referring to
We have tens of thousands of lines of code for the platform and millions of workflow runs through them with no production errors coming from the core agent runtime which manages workflow state, variables, rehydration (suspend + resume). All of the errors and fragility are at the imperative shell (usually integrations).
Some of the examples in this thread I think get it wrong.
db.getUsers() |> filter(User.isExpired(Date.now()) |> map(generateExpiryEmail) |> email.bulkSend
This is already wrong because the call already starts with I/O; flip it and it makes a lot more sense.

What you really want is (in TS, as an example):
bulkSend(
userFn: () => User[],
filterFn: (user: User) => bool,
expiryEmailProducerFn: (user: User) => Email,
senderFn: (email: Email) => string
)
The effect of this is that the inner logic of `bulkSend` is completely decoupled from I/O and external logic. Now there's no need for mocking or integration tests, because it is possible to use pure unit tests by simply swapping out the functions. I can easily unit test `bulkSend` because I don't need to mock anything or know about the inner behavior.

I chose this approach because writing integration tests with LLM calls would make the testing run too slowly (and costly!), so most of the interaction with the LLM is simply a function passed into our core, where there's a lot of logic for parsing and moving variables and state around. You can see that you no longer need mocks and no longer need to spy on calls, because in the unit test you can pass in whatever function you need and simply observe whether the function was called correctly, without a spy.
It is easier than most folks think to adopt -- even in imperative languages -- by simply getting comfortable working with functions at the interfaces of your core API. Wherever you have I/O or a parameter that would be obtained from I/O (database call), replace it with a function that returns the data instead. Now you can write a pure unit test by just passing in a function in the test.
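Concretely, the same move in Python (hypothetical names): the core never sees the database, only a function that produces the data.

```python
# Instead of core(db_handle), pass a function that returns the data.
def expiry_report(fetch_users, now):
    return [u["email"] for u in fetch_users() if u["expires_at"] < now]
```

In production, `fetch_users` wraps the real query; in a unit test it's a lambda over fixture data, so no mocking library is needed.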
I am very surprised how many of the devs on the team never write code that passes a function down.
Even Python examples in trainings that look functional might not be. They put the function calls in as arguments. The beginner thinks the function returns some data, that would be in a variable, and they are implicitly passing that variable. Might as well, for readability, do the function call first to pass a well-named variable instead.
That was my experience. That plus minimizing side effects in functions. I've yet to really learn functional programming where I'd think to pass a function in an API. What are the best articles or books for us to learn that in general or in Python?
The book Architecture Patterns in Python by Percival and Gregory is one of the few books that talks about this stuff using Python. It's available online and been posted on HN a few times before.
What if a FCF (functional core function) calls another FCF which calls another FCF? Or do we do we rule out such calls?
Object orientation is only a skin-deep thing, and it boils down to functions with a call stack. The functions, in turn, boil down to a sequenced list of statements with IF and GOTO here and there. All that boils down to machine instructions.
So, at function level, it's all a tree of calls all the way down. Not just two layers of crust and core.
You’ll find usually that side effect in imperative actions is usually tied to the dependencies (database, storage, ui, network connections). It can be quite easy to isolate those dependencies then.
It’s ok to have several layers of core. But usually, it’s quite easy to have the actual dependency tree with interfaces and have the implementation as leaves for each node. But the actual benefits is very easy testing and validation. Also fast feedback due to only unit tests is needed for your business logic.
I also see that lately "code quality" is the least concern of most (even software product) companies, just ask AI to write code in a single file / module / class - then launch feature and fix if you have to. I could see that in a few years things will be extremely messy (but who can say).
There's a link with more info at the top. I'm not sure why this one in particular made it to the front page of HN.
Have to ship it no matter what.
Hopefully by 2045 these ideas will have gotten a little more traction.
Or is it that the example in the article is a bit poor?
> But personally, I don't use tell-dont-ask. I do look to co-locate data and behavior, which often leads to similar results. One thing I find troubling about tell-dont-ask is that I've seen it encourage people to become GetterEradicators, seeking to get rid of all query methods.
[0] https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...
I mean, what if you want to do IO and have mutable data structures inside a do block? I'm afraid I'm going to have to prescribe you a monad transformer. Be careful of the side effects.
[1] https://en.wikipedia.org/wiki/Hexagonal_architecture_(softwa...
[0]: https://www.destroyallsoftware.com/talks/boundaries
[1]: https://www.destroyallsoftware.com/screencasts/catalog/funct...
Hex is kind of a PITA for ground up projects, but if you are doing something where you know multi-platform/cloud/device whatever is important it is cool.
Some things are flat out imperative in nature. Open/close/acquire/release all come to mind. Yes, the RAII pattern is nice. But it seems to imply the opposite? A functional shell over an imperative core. Indeed, the general idea of imperative assembly comes to mind as the ultimate "core" for most software.
Edit: I certainly think having some sort of affordance in place to indicate if you are in different sections is nice.
It can be done "functionally" but doesn't necessarily have to be done in an FP paradigm to use this pattern.
There are other strategies to push resource handling to the edges of the program: pools, allocators, etc.
Consider your basic point of sale terminal. They get a payment token from your provider using the chip, but they don't resolve the transaction with your card/chip still inserted. I don't know any monad trick that would let that general flow appear in a static piece of the code?
That's not what functional core, imperative shell means though. It's a given that CPUs aren't functional. The advice is for people programming in languages that have expressions - ruby, in the case of the original talk. The functional paradigm mostly assumes automatic memory management.
I'm sympathetic to the idea, as you can see it in most instruction manuals that people are likely to consume. The vast majority of which (all of them?) are imperative in nature. There is something about the "for the humans" layer being imperative. Step by step, if you will.
I don't know that it fully works, though. I do think you are well served being consistent in how you layer something. Where all code at a given layer should probably stick to the same styles. But knowing which should be the outer and which the inner? I'm not clear that we have to pick, here. Feel free to have more than two layers. :D
Indeed. It's all well and good to impart some kind of flavour into your code and call it functional, but transactions do not give a crap about style.
A transaction needs to be able to 'back out' to fulfill 'all-or-nothing' semantics. Side effects are what make this impossible.
I always feel like I have to "maintain" code so I usually get bored after 3k lines of code, but truth is code doesn't have to be maintained if we like it the way it is, which obviously includes all the functionality that comes with it.
I mean, it's not much, but the concept just resonates with me and I want to share it. Sad that I can't even share a simple opinion nowadays...