I think Accessors.jl has a quite nice and usable implementation of lenses. It's something I use a lot, even in code where I'm working with a lot of mutable data, because it's nice to localize and have exact control over what gets mutated and when (and I often find myself storing some pretty complex immutable data in more 'simple' mutable containers)
https://github.com/smithzvk/modf
You use place syntax like what is used with incf or setf, denoting part of some complex object. But the modification is made to the corresponding part of a copy of the object, and the entire new object is returned.
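The same idea can be sketched in Python (the names here are my own, not modf's API): describe a place in a nested immutable object, and get back a new object with that place changed, leaving the original untouched.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Engine:
    horsepower: int

@dataclass(frozen=True)
class Car:
    engine: Engine
    color: str

def with_horsepower(car, hp):
    # Rebuild only the path engine -> horsepower; everything else is reused.
    return replace(car, engine=replace(car.engine, horsepower=hp))

old = Car(Engine(120), "red")
new = with_horsepower(old, 200)
print(old.engine.horsepower)  # 120 -- the original is unchanged
print(new.engine.horsepower)  # 200
```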
https://www.inkandswitch.com/cambria/
Not sure how "A Lens allows to access or replace deeply nested parts of complicated objects." is any different from writing a function to do the same?
Julia curious, very little experience
In the end it is really just function composition but in a very concise and powerful way.
In your Cambria example the lens is defined as YAML. So this YAML needs to be parsed and interpreted and then applied to the target data. The rules that are allowed in the YAML format must be defined somewhere. With pure functional lenses, the same kind of transformation rules can be defined just by function composition of similar elemental rules that are themselves only pairs of functions.
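A minimal sketch of this "pair of functions" view in Python (the names are illustrative, not any particular library's API): a lens is a (getter, setter) pair, the setter returns an updated copy, and composition of two lenses is just function composition on both halves.

```python
# A lens is a (getter, setter) pair; the setter returns a new copy.
def lens(get, set_):
    return (get, set_)

def compose(outer, inner):
    og, osetter = outer
    ig, isetter = inner
    return lens(
        lambda whole: ig(og(whole)),
        lambda whole, v: osetter(whole, isetter(og(whole), v)),
    )

# An elemental lens into a dict key:
def key(k):
    return lens(
        lambda d: d[k],
        lambda d, v: {**d, k: v},
    )

person = {"name": "Ada", "address": {"city": "London", "zip": "N1"}}
city = compose(key("address"), key("city"))

get, set_ = city
print(get(person))                 # London
updated = set_(person, "Cambridge")
print(updated["address"]["city"])  # Cambridge
print(person["address"]["city"])   # London -- original untouched
```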
> So this YAML needs to be parsed and interpreted and then applied to the target data. The rules that are allowed in the YAML format must be defined somewhere.
I wasn't trying to get into the specific technology. The Julia still needs to be parsed, and while YAML has them separate, CUE does not (which is where I write things like this and have building blocks for lenses [1], in the conceptual sense).
In the conceptual sense, or at least an example of one, lenses are about moving data between versions of a schema. It sounds like what you are describing is capable of this as well? (likely among many other things both are capable of)
[1] https://hofstadter.io/getting-started/data-layer/#checkpoint...
For example in Lombok, the @Data annotation will create a getter and a setter for every private member, and @Getter and @Setter will do the individual methods respectively.
Annotating a class will do every private member, or you can annotate a specific member.
A lens is a shortcut to making a getter/setter for something several elements deep, where instead of calling:
`parentObject.getChild().setChildAttribute()` you can call: `parentObject.setChildAttributeViaLens()`
and not need to write multiple functions in both classes, or even use multiple annotations.
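A hedged sketch of that shortcut in Python rather than Java/Lombok (all names here are hypothetical): instead of hand-writing a setter on the parent that drills into the child, a generic deep setter walks the path and rebuilds copies along it.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Child:
    attribute: str

@dataclass(frozen=True)
class Parent:
    child: Child

def set_in(obj, path, value):
    """Return a copy of obj with the attribute at `path` replaced."""
    head, *rest = path
    if not rest:
        return replace(obj, **{head: value})
    return replace(obj, **{head: set_in(getattr(obj, head), rest, value)})

p = Parent(Child("old"))
p2 = set_in(p, ["child", "attribute"], "new")
print(p2.child.attribute)  # new
print(p.child.attribute)   # old -- original untouched
```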
Lenses are an embedded DSL for doing this via syntax that reads similar to the mutable variant. Additionally, they allow composing many such transformations.
As a dilettante at programming language design, I have my own toy language. It uses exclusively immutable data structures (C++ "immer"). I present it to the programmer as simple value semantics. `obj.foo[5].bar.a = 2` works and sets `obj` to a new structure where the path through `foo`, `bar`, to `a` has all been rewritten. Since I put it in as a language feature, users don't have to learn about lenses. Why isn't this technique more common in programming language design? Is it so offensive that the syntax `obj.a = 2` ends up rebinding `obj` itself? The rule in my language is that assignment rebinds the leftmost "base" of a chain of `.` field and `[]` array index accesses on the LHS. I'm ignorant of the theory or practical consideration that might lead a competent designer not to implement it this way.
Without having any expertise in the matter, I'd guess that mutability has the advantage of performance and efficient handling of memory.
obj.foo[5].bar.a = 2
An immutable interpretation of this would involve producing new objects and arrays, moving or copying values.

Another possible advantage of the mutable default is that you can pass around references to inner values.
> Is it so offensive that the syntax `obj.a = 2` ends up rebinding `obj` itself?
That does imply that the `obj` binding itself is mutable, so if you are trying for entirely immutable data structures (by default), you do probably want to avoid that.
This is why the syntax sugar, in languages that have been exploring syntax sugar for it, starts to look like:
let newObj = { obj with a = 2 }
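In Python terms, a comparable sketch would use `dataclasses.replace` to play the role of `{ obj with a = 2 }` (the field names here are just illustrative):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Obj:
    a: int
    b: str

obj = Obj(a=1, b="x")
new_obj = replace(obj, a=2)  # obj is untouched; the copy gets a new name
print(obj.a, new_obj.a)      # 1 2
```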
You still want to be able to name the new immutable binding.

One way to think of the goal of the functional paradigm is to allow extreme modularity (reuse) with minimal boilerplate [1]. The belief is that minimal boilerplate + maximum reuse (not in ad-hoc ways, but using the strict structure of higher-order patterns) leads to easily maintainable, bug-free code -- especially in rapidly evolving codebases -- for the one-time cost of understanding these higher-order abstractions. This is why people keep harping on pieces that "compose well". The emphasis on immutability is merely a means to achieve that goal, and lenses are part of the solution to allow great ergonomics (composability) along with immutability. For the general idea, look at this illustrative blog post [2], which rewrites the same small code block ten times -- making it more modular and terse each time.
[1] https://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.p...
[2] https://yannesposito.com/Scratch/en/blog/Haskell-the-Hard-Wa...
Once the language is expressive enough to compose pieces well and write extremely modular code, the next bit that people get excited about is smart compilers that can: transform this to efficient low-level implementations (eg. by fusing accesses), enforce round-trip consistency between get & set lenses (or complain about flaws), etc.
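The round-trip consistency mentioned here is usually stated as the "lens laws". An illustrative check in Python (hypothetical code, not a real tool) for a simple dict-key lens:

```python
def get(d):
    return d["a"]

def put(d, v):
    return {**d, "a": v}

d = {"a": 1, "b": 2}

# GetPut: putting back what you got changes nothing.
assert put(d, get(d)) == d
# PutGet: you get out what you put in.
assert get(put(d, 42)) == 42
# PutPut: a second put overwrites the first.
assert put(put(d, 1), 2) == put(d, 2)
print("all three lens laws hold for this lens")
```

A compiler (or property-based test suite) could check these laws mechanically and reject lenses that violate them.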
This is a self-inflicted problem. Make data public and there is no boilerplate.
Your natural alternative to lenses in imperative languages is usually to just store a reference or pointer to the part you want to modify. Like a lens, but in-place.
Or on second look the sibling comment is probably right and it’s about immutability maybe.
An immutable variable can be safely shared across functions or even threads without copying. It can be created on the stack, heap, or in a register, whatever the compiler deems most efficient.
In the case where you want to change a field of an immutable variable (the use case of lenses), immutable types may still be more efficient, because the variable was stack-allocated and copying it is cheap, or because the compiler can correctly infer that the original object is not in use anymore and thus reuse the data of the old variable for the new one.
Coming from the C++ world, I think immutability by default is pretty neat, because it enables many of the optimisations you would get from C++'s move semantics (or Rust's borrow checker) without the hassle.
Your comment reads like the response of someone who is struggling to understand a concept.
I am thinking of using it for data science work.
Any draw backs? or advantages I should know about?
There was a bit of weirdness with the type system with its dynamic dispatch making things slow, but specifying a type in function headers would resolve those issues.
I also thought that the macro system was pretty nice; for the most part I found creating custom syntax and using their own helpers was pretty nice and easy to grok.
Since I don’t do much data work I haven’t had much of an excuse to use it again, but since it does offer channels and queues for thread synchronization I might be able to find a project to use it for.
But if you are just interested in learning a new language and trying it in data science OR are not currently looking to enter the data science job market, then by all means: Julia is great and in many ways superior to Python for data science.
It’s just that «everyone» is doing data science in Python, and if you’re new to DS, then you should also know Python (but by all means learn Julia too!).
There is only one language that I have an active hatred for, and that is Julia.
Imagine you try to move a definition from one file to another. Sounds like a trivial piece of organization, right?
In Julia, this is a hard problem, and you can wind up getting crashes deep in someone else's code.
The reason is that this causes modules that don't import the new file to have different implementations of the same generic function in scope. Julia features the ability to run libraries on data types they were never designed for. But unlike civilized languages such as C++, this is done by randomly overriding a bunch of functions to do things they were not designed to do, and then hoping the library uses them in a way that produces the result you want. There is no way to guarantee this without reading the library in detail.

There is also no kind of semantic versioning that can tell you whether the library has a breaking change, as almost any kind of change becomes a potentially-breaking change when you code like this.
This is a problem unique to Julia.
I brought up to the Julia creators that methods of the same interface should share common properties. This is a very basic principle of generic programming.
One of them responded with personal insults.
I'm not the only one with such experiences. Dan Luu wrote this piece 10 years ago, but the appendix shows the concerns have not been addressed: https://danluu.com/julialang/
Interfaces could be good as intermediaries and it is always great to hear JuliaCon talks every year on the best ways to implement them.
> Imagine you try to move a definition from one file to another. Sounds like a trivial piece of organization, right?
In my experience it's mostly trivial. I guess your pain points may have come from making each file a module, adding methods for your own types in a different module, and then moving things around, which is error-prone. The remedy here sometimes is to not make internal modules. However, the best solution is to write integration tests, which is good software development practice anyway.
Can you provide some concrete examples of those issues existing today?
Advantages: it is yet another Lisp-like language in an Algol-like syntax, like Dylan and Wolfram Language, and also another one with multi-method support à la Common Lisp, Dylan, Clojure, and whoever else implements a subset of CLOS.
It was designed from the ground up to be compiled with a JIT, not as an afterthought.
These are the kinds of places making use of it,
Dynamic yet performant language with LISPy features and focus on numerical applications? Count me in.
But then I found out that the execution of some ideas is rather bad, and some ideas are great on paper but not in practice. For example, the debugging experience is a joke even compared to the SBCL debugger (and of course you need to download the Debugger.jl package, because who needs a good debugger in the base language implementation?). And multiple dispatch is a very powerful feature... I sometimes think it is too powerful.
There is no proper IDE, and the VS Code extension was slow and unstable when I tried it (last time a few months ago).
But my biggest gripe is with the people developing Julia. Throughout the years, every time people complained about something ("time to first plot", static compilation, etc.), the initial responses were always "you are holding it wrong", "Julia is not intended to be used this way", "just keep your REPL open", "just use this 3rd-party package", only for them to try to address the problem a few releases later, sometimes in a suboptimal way. It is nice that in the end they try to deliver solutions, but it seems to me it always requires constant push from the bottom.
Moreover, I am quite allergic to marketing strategies and hype generation:
Julia doesn't run like C when you write it like Python. It can be very fast, but then it requires quite detailed tuning.
You don't need to think about memory management, until you need to, because otherwise allocations kill your performance.
You can omit types, until you can't.
Those things are quite obvious, but then why produce so much hype and bullshit people through curated and carefully tuned microbenchmarks?
It maybe solves the two-language problem, but in return it gives you a million-packages issue. You need a package to have a tolerable debugger (Debugger.jl), you need a package to have static and performant arrays (StaticArrays.jl), you need a package to have enums worth using, you need a package to hot-reload your code without restarting the REPL (Revise.jl), you need a package to compile your code to an executable (PackageCompiler.jl/StaticCompiler.jl, they started to address that in the last release), etc. And then you need to precompile them on your machine to have a reasonable startup time.
TLDR: Julia is wasted potential.
Julia is a completely reasonable general purpose language, but getting people to switch generally requires a ~10x better experience, and Julia can't deliver that for general purpose applications.
Edit: It’s not deepcopying the whole struct, just the parts that need to point to something new. So if you update a.b.c, it will shallow-copy a and a.b, but nothing else.
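That path-copying can be sketched in Python (hypothetical structs, with a list standing in for some large untouched substructure): updating `a.b.c` copies `a` and `a.b`, while the untouched sibling is shared rather than deep-copied.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class B:
    c: int

@dataclass(frozen=True)
class A:
    b: B
    sibling: list  # some large substructure the update never touches

a = A(B(1), sibling=[1, 2, 3])
a2 = replace(a, b=replace(a.b, c=2))  # copies a and a.b only

print(a2.b.c)                   # 2
print(a.b.c)                    # 1 -- original path intact
print(a2.sibling is a.sibling)  # True -- shared, not copied
```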